Testing High-Speed Memories


High-speed Flash memory and solid-state drives (SSDs) have become a staple of consumer electronics devices, from smartphones to laptop computers. It is almost impossible to imagine a gadget without high-speed storage. A vital part of producing and testing these devices is therefore ensuring that their memory matches the speed of the rest of the device.

Ashwin Gopinath


A recent survey by the International Electronics Manufacturing Initiative (iNEMI) asked test engineers in the electronics industry what their biggest problems were while testing circuit boards.

Of the eleven possible problems listed, characterising and testing memories soldered to circuit boards was among the top three. Memory test topped the list of prevalent problems along with ‘lack of access to test points’ and ‘the need to perform debug/diagnostics on board failures.’ Clearly, the ability to thoroughly characterise, test and diagnose problems with soldered-down memories is one of the most pressing problems in the industry.

When memory speeds were not as high as today's, and communication protocols over memory buses were not as complex, performing static shorts-and-opens testing on memory interconnects might have sufficed. Today, signal propagation through passive devices such as capacitors, and signal integrity on high-speed traces to memory, must be validated and characterised for an open data window. Often this data window shows sensitivities to clock jitter, temperature, electrical noise, and the level and stability of the supply voltage.

One of the several factors that have complicated memory testing is the complexity of these buses. Many of the prominent memory buses, like the various generations of double data rate (DDR) memory, have achieved extremely high data transfer rates at the expense of simplicity. Indeed, the difficulties in testing memory buses have only been exacerbated by each successive generation's higher transfer speeds.

Under the hood of memory testing
All memory tests are based on writing to and reading from the memory. Test reads and writes conducted through the board's normal memory access path, in its normal operating manner, are referred to as functional memory tests. Usually, functional memory tests can prove out the design or functionality of a given memory architecture, but they may not be able to detect, let alone diagnose, manufacturing or assembly defects.

Most manufacturing memory tests are algorithmic and structural pseudo-functional tests. The most prevalent form of an algorithmic pseudo-functional memory test is memory built-in self-test (MBIST). MBIST implies that the test is implemented on-silicon or provided by an algorithmic test instrument embedded within a memory test system.
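MBIST engines typically execute 'march' algorithms, which sweep the address space in fixed orders while writing and reading complementary values. As an illustrative sketch only (not any vendor's actual MBIST implementation), the classic March C- algorithm can be modelled in Python against a simulated word-addressable memory:

```python
def march_c_minus(mem):
    """Run March C- over a word-addressable memory model (a list).

    Elements: up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0).
    Returns a list of (address, expected, read) failures.
    """
    n = len(mem)
    fails = []

    def rd(addr, expect):
        if mem[addr] != expect:
            fails.append((addr, expect, mem[addr]))

    for a in range(n):            # up: w0
        mem[a] = 0
    for a in range(n):            # up: r0, w1
        rd(a, 0); mem[a] = 1
    for a in range(n):            # up: r1, w0
        rd(a, 1); mem[a] = 0
    for a in reversed(range(n)):  # down: r0, w1
        rd(a, 0); mem[a] = 1
    for a in reversed(range(n)):  # down: r1, w0
        rd(a, 1); mem[a] = 0
    for a in range(n):            # up: r0
        rd(a, 0)
    return fails


class FaultyMem(list):
    """Demo memory model with one stuck-at-1 cell at address 5."""
    def __setitem__(self, addr, val):
        super().__setitem__(addr, 1 if addr == 5 else val)
```

A fault-free array passes with no failures, while the stuck-at-1 model is flagged at address 5 on every read-0 element; real MBIST hardware runs the same element sequence at speed inside the silicon.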

Since memory testing is a field of its own, the parameters tested and the methods for testing them vary a great deal.

Explaining the memory test, Abhay Samant, marketing manager, National Instruments, says, “A memory test consists of testing the various components of the memory, including testing for functional performance of the address and data lines, and timing tests. The lines are tested using various patterns such as marching ones or pseudo-random values. The values are then read back and compared against expected data for errors. For conducting functional tests on the memory tester, all lines on the memory are tested with sequences such as moving ones, random moving inversion bits, or a random number sequence.”
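The write-then-compare flow Samant describes can be sketched in a few lines of Python. This is a hypothetical stand-in for a tester's HSDIO write/read calls, not NI's actual API: a single '1' is walked across the data lines at one address and each read-back is compared against what was written, which exposes stuck or bridged data lines.

```python
def walking_ones_data_test(write, read, addr=0x0, width=8):
    """Walk a single '1' across each data line at one address and compare
    read-back values against the written pattern. `write`/`read` stand in
    for the tester's HSDIO operations (hypothetical API).
    Returns a list of (bit, written, read) errors.
    """
    errors = []
    for bit in range(width):
        pattern = 1 << bit
        write(addr, pattern)
        got = read(addr)
        if got != pattern:
            errors.append((bit, pattern, got))
    return errors


# Demo memory backed by a dict, plus a simulated fault for illustration.
mem = {}

def plain_write(addr, val):
    mem[addr] = val

def plain_read(addr):
    return mem[addr]

def bridged_read(addr):
    # Simulated fault: data lines 2 and 3 shorted together (wired-OR).
    v = mem[addr]
    return v | 0b1100 if v & 0b1100 else v
```

Running the test with `plain_read` reports no errors; with `bridged_read` it flags bits 2 and 3, since the walking '1' on either line reads back on both.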

Sanchit Bhatia, digital applications specialist, Agilent Technologies, adds, “DDR is the most popular high-speed memory standard. It is defined by the Joint Electronic Devices Engineering Council (JEDEC) specifications. The JEDEC specifications have strict jitter requirements for the clock and strict signal integrity requirements for all signals. To qualify, the clock must pass a long list of jitter tests and data signals should pass a long list of timing tests. With automated DDR software, jitter measurements can be carried out more efficiently and effectively.”

Roy Chestnut, product line manager at Teledyne LeCroy, chimes in, “DDR signalling is a parallel data bus that requires validation of a variety of DDR-specific physical-layer measurements (slew rate, amplitude, overshoot, etc) that are closely defined by the JEDEC standards. These parametric measurements are not standard in oscilloscopes, and it would take users a long time to set up the oscilloscope to make them. Additionally, the parallel nature of the DDR bus requires close timing coordination of the various clock, strobe and data signalling. These timing measurements in a variety of test conditions are also automated. Lastly, a variety of fairly standard clock jitter tests are also automated. We have products that focus on command and address timing between the system memory controller and the memory.”

In the broadest sense, memory testing takes place over the entire life-cycle of a system, beginning with board development or design, moving into production and culminating in post-production stages such as field service. The cycle then repeats when a memory test is performed during the next generation of board design. During each phase in the life cycle, the objectives and goals of memory test differ and the memory test process itself is typically referred to differently, according to the objectives of that particular phase.

The different memory tests

Some of the other tests performed are:
Address lines test. It tests the functionality and connection of address lines. The memory is initialised with all 1’s, after which a ‘0’ is written to all the bits of the address. One of the lines is then changed to ‘1’ and using a high-speed digital I/O board (HSDIO), all the lines are read back. The expected value is all 1’s. This test is then repeated for all address lines.
Data lines test. In this test, each bit of memory is tested to see whether it can store and return correct values. Once again, all the bits are initialised to ‘1.’ The first bit of an address is changed to ‘0.’ Using the HSDIO board, the bit is acquired and verified as a ‘0.’ This test is then rotated through each bit of the address and repeated for all memory locations.
Self address test. This test writes the value of its own address to each location. Using an HSDIO board, each address is read and checked for values of the data stored. This test is then repeated several times.
Moving inversion ones. In this test, an all-1’s moving-inversions algorithm is written to the memory and read back. The memory is initialised to all 1’s. It is then filled with all 0’s from top to bottom, verifying each location; the HSDIO board is used to verify the locations. Then the memory is filled with all 1’s. The values are read back using the HSDIO board and all the data is verified.
Random moving inversion. Similar to the moving inversion ones test, in this test the memory is tested with a moving-inversions algorithm using random data. The memory is initialised to all 1’s. Random values are written to the memory, from top to bottom, and then verified.
Random number sequence. In this test, random values are written to and read from the memory. This procedure is repeated for several locations.

—Information provided by Abhay Samant of NI
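Of the tests listed above, the self address test is perhaps the simplest to see in code. Below is a minimal sketch in Python, assuming a word-addressable memory model rather than real tester hardware: each location stores its own address, and a read-back sweep catches address-line faults that make two locations alias to the same physical cell.

```python
def self_address_test(mem):
    """Self address test: write each location's own address as its data,
    then read back and verify. An address-line fault that aliases two
    locations shows up as a mismatch, because the later write to the
    aliased address overwrites the earlier one.
    Returns a list of (address, value_read) mismatches.
    """
    n = len(mem)
    for a in range(n):
        mem[a] = a
    return [(a, mem[a]) for a in range(n) if mem[a] != a]


class AliasedMem:
    """Demo fault model: address line 2 stuck at 0, so addresses that
    differ only in bit 2 alias to the same physical cell."""
    def __init__(self, n):
        self.cells = [0] * n

    def __len__(self):
        return len(self.cells)

    def __getitem__(self, a):
        return self.cells[a & ~0b100]

    def __setitem__(self, a, v):
        self.cells[a & ~0b100] = v
```

A healthy 16-word memory passes cleanly, while the stuck-line model fails at every address whose bit 2 is 0, since those cells were overwritten by their aliased partners.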

Al Crouch, chief technologist at ASSET InterTech, explains in his whitepaper, “During design and new product introduction, testing memory in a timely fashion is particularly critical if the new system is to be delivered promptly to the marketplace. During the initial phase of board bring-up when first prototypes are received, memory tests are performed to identify the root causes of failures or faults in the design of the memory architecture so that these can be quickly corrected prior to the design’s release to production. Meanwhile, performance of the memory architecture is characterised to determine whether the design meets or exceeds its performance specifications.”

Bhatia says, “The JEDEC specification requires compliance at the hard-to-reach fine ball-grid array (FBGA) package ballout on the dynamic random-access memory (DRAM) chip. Due to the difficulty of probing at BGA pins, engineers tend to probe at other locations such as the signal trace or surface-mount components like termination resistors and capacitors. Although this may seem straightforward, signal integrity could be compromised by probing here. First, probing at these locations often causes signal reflection, resulting in non-monotonic edges, overshoot, ringing and other issues. Rather than true signal performance, you see a signal that includes the effect of reflection at components. This can cause errors on slew-rate and setup and hold-time measurements.”

“To address this probing challenge and to optimise DDR probing, Agilent has designed specialised tools. One such tool is BGA probe adaptor—a thin fixture that can be attached between the DRAM chip and the circuit board with a compatible footprint on the top and bottom side. The signals at the DRAM ballout are then routed to the top side of the BGA probe adaptor, so the oscilloscope and logic analyser probes can access them. This method provides a direct signal access point to the DRAM ballout for true compliance with the DDR specification. Since it’s compatible with oscilloscopes and logic analysers, you can perform parametric and functional measurements with the same BGA probe,” he adds.

According to Chestnut, “Compared to other electrical serial data physical-layer validation requirements, the bandwidth required for DDR is not very high. Oscilloscopes in this bandwidth range are widely available. However, DDR is a parallel data bus with many data lines—buses with up to 64 data lines are not uncommon. As speeds increase, the need to perform electrical physical-layer validation on all of the data lines, and not just the assumed ‘worst case’ data lines, will increase. Separating the read and write data signals properly to perform validation requires at least two and often three other signals, leaving a conventional four-channel oscilloscope capable of validating only one data line at a time.”

Crouch adds, “Once production has hit its stride, memory tests are performed again on boards that fail the manufacturing test suite and do not qualify for release to the market. The only intent here is to determine whether the failures were a result of environmental conditions, random defects or some systematic problem in manufacturing that is affecting yields. During the post-production phase of the life-cycle, or when systems have been installed in the field, memory tests are performed to troubleshoot malfunctioning systems and maintain user satisfaction. The two main goals during this phase are, first, to identify any and all reliability concerns such as memory chips or board structures that fail earlier than expected and, second, to identify changes that might make the board design or the component selection better suited to deployment. The key aspects of post-production testing are to collect performance and reliability data in a real-world environment and to trace the collected data back to the sources of the faults or failures that generated it.”

Samant says, “Memory testing today requires high channel count, high-speed dynamic data and multiple control channels, each with per-pin parametric measurement capability. It also requires per-pin programmable voltage levels and the ability to independently source or sink current. The tests also require a sophisticated timing engine with sub-hertz frequency resolution and per-pin timing resolution in the range of picoseconds. Today’s tests also require the ability to perform inline processing between the stimulus and response, and real-time performance.”

Future of high-speed testing
Today, circuit boards have complex memory architectures that are becoming harder to test due to their high speeds, high rate of data transfers on memory buses, complex protocols and loss of test points on boards where a test probe can be placed. Coverage from intrusive probe-based methods of memory test and validation, such as oscilloscopes and in-circuit test (ICT) systems, is rapidly eroding. These legacy test methods are quite challenged by today’s aggressive test goals.

Bhatia remarks, “The high-speed memory field has grown in speed by leaps and bounds. DDR1 used to operate at 200MT/s or a 100MHz clock rate, whereas GDDR5, the high-speed memory for graphics, operates at 7GHz clock speeds. There is also a trend towards over-clocking memory designs beyond their specs. For example, DDR2 is specified till 800MT/s, whereas it is operated at 1GT/s. DDR3 is defined till 1600MT/s, whereas actual operating speeds cross 2GT/s. In addition, the packaging of high-speed memories has also changed over the years, with variants of DDR found in DIMM and SO-DIMM form factors.”

Kenneth Johnson, director of marketing, Teledyne LeCroy, says, “Each new generation of DDR memory has doubled the clock rate and reduced the voltages used. DDR test requirements, aside from probe and oscilloscope bandwidth requirements, haven’t changed much from DDR1 to DDR3. DDR4, due to its higher transfer speeds, requires new jitter and eye diagram tests that are similar to what has been utilised for high-speed serial data electrical physical-layer validation, namely, eye diagrams of clock/data signals and extrapolated total jitter (Tj) calculation with random jitter (Rj) and deterministic jitter (Dj) separation. Other changes in memory specifications include the adoption of smaller footprints and packages. These trends make test and validation much more difficult as probing becomes limited to outside the memory package.”
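The extrapolated Tj calculation Johnson refers to is commonly done with the dual-Dirac model, in which Tj at a target bit error ratio (BER) is estimated as Tj(BER) = Dj + 2·Q(BER)·Rj, with Rj taken as the rms random jitter and Q derived from the Gaussian tail: BER = ½·erfc(Q/√2). A small stdlib-only Python sketch (an illustration of the model, not any oscilloscope vendor's implementation):

```python
import math

def q_factor(ber):
    """Solve BER = 0.5 * erfc(Q / sqrt(2)) for Q by bisection."""
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
            lo = mid  # tail probability still too large: Q must grow
        else:
            hi = mid
    return (lo + hi) / 2

def total_jitter(dj, rj_rms, ber=1e-12):
    """Dual-Dirac estimate: Tj(BER) = Dj + 2 * Q(BER) * Rj_rms.

    dj and rj_rms in the same time unit (e.g. picoseconds).
    """
    return dj + 2 * q_factor(ber) * rj_rms

# Example: Dj = 30 ps, Rj = 2 ps rms at BER 1e-12.
# 2*Q(1e-12) is about 14.07, so Tj is about 58.1 ps.
```

The familiar multiplier 14.069 for BER 10⁻¹² falls out of the Q-factor solve, which is why random jitter dominates the extrapolated total even when its rms value looks small next to Dj.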

Sai Venkat Kumar B, Country Marcomm, Tektronix, says, “With the increase in data rate, memory density and power requirements, memory testing has become more challenging. We now have 120-plus tests to perform as per JEDEC for DDR4. Performing these tests in conformance with the specifications presents a host of challenges and can be a complex and time-consuming task.” He adds, “One of the first obstacles to be overcome in memory validation is the issue of accessing and acquiring the necessary signals. The JEDEC standards specify that measurements should be made at the BGA ballouts of the memory component. FBGA components include an array of solder ball connections that are, for practical purposes, inaccessible. Nexus Technology’s patent-pending EdgeProbe design removes mechanical clearance issues as the interposers are targeted to be the size of the memory components themselves. Embedded resistors within the interposers place the scope probe tip resistor extremely close to the BGA pad, providing an integrated scope probe on all signals.”

The author is a tech correspondent at EFY Bengaluru

