    The Memory FAQ

    Okay! Memory, and all that goes with it, is an area that causes a massive amount of confusion for newbies and even some experienced PC users. There is a lot to consider, but most of it is pretty straightforward.

    Here's my "perspective" on most of it. Below you will find a list of frequently asked questions, and their corresponding answers. If you feel something should be corrected or if you have a suggestion, please contact me via email @ drisler AT pcperspective DOT com or PM, as this FAQ is never truly complete.


    Cliffnotes?

    Tharr are no cliffnotes!!




    =================================
    Contents
    =================================






    Memory Basics

    • The nitty-gritty
    • Memory Access
    • What is Virtual Memory?
    • DRAM Memory Technologies
    • DDR Memory Speed
    • Processors and Bandwidth
    • DDR Dual Channel
    • What do these terms mean?
    • Installing New Memory
    • Can I mix DRAM?
    • How do I use Dual Channel?




    Bios Settings

    • Memory Timings
    • Which timings mean what?
    • What is SPD?
    • To tweak or not to tweak?
    • Ok, so I want to tweak, what do I do?
    • The Anomaly: nVIDIA's nForce2 & tRAS
    • Dealing with Memory Speeds: What is sync / async?




    Overclocking

    • How do I overclock my memory?
    • What to do with ddr voltage?
    • Do I need ram cooling?
    • How do I burn-in memory?
    • Memory Chips




    Buying Memory

    • What memory to buy?
    • Why are you recommending PC3500, when my mobo only supports PC3200?
    • How much is enough?
    • Matched or Certified Dual Channel RAM

    =================================
    Memory Basics
    =================================





    The nitty-gritty

    RAM (Random Access Memory) is a means to store data and instructions temporarily for subsequent use by your system processor (CPU). RAM is called "random access" because earlier read-write memories were sequential and did not allow data to be “randomly accessed”. RAM differs from read-only memory (ROM) in that it can be both read and written. It is considered volatile storage because unlike ROM, the contents of RAM are lost when the power is turned off. ROM is known to be non-volatile and is most commonly used to store system-level programs that we want to have available to the PC at all times, such as system BIOS programs. There are several ROM variants that can be changed or written to, under certain circumstances; these can be thought of as “mostly” read-only memory: PROM, EPROM, and EEPROM.

    Like ROM, there are also variants of RAM which have different properties and purposes, two of which are SRAM and several flavors of DRAM. DRAM, or Dynamic RAM, is the slower of the two because it needs to be periodically refreshed or recharged thousands of times per second. If this is not done regularly, then the DRAM will lose its stored contents, even if it continues to have power supplied to it. This refreshing action is why the memory is called dynamic, meaning moving or always changing. SRAM, or Static RAM, on the other hand, does not need to be refreshed like DRAM. This gives SRAM faster access times (the time it takes to locate and read one unit of memory). SRAM is, however, far more expensive to manufacture, which is why it is primarily used in relatively small quantities (normally less than 1 MB) for cache on processors, while the cheaper-to-manufacture DRAM is left for system RAM.

    Memory can be built right into a motherboard, but it is more typically attached to the motherboard in the form of a module called a DIMM. A DIMM (Dual Inline Memory Module) is the name given to the circuit board that holds the memory chips, gold or tin/lead contacts and other memory devices, and provides a 64 bit interface to the memory chips. You've probably seen memory listed as 32x64 or 64x64. These numbers represent the number of chips multiplied by the capacity of each individual chip, which is measured in megabits (Mb), or one million bits. Take the result and divide it by eight to get the number of megabytes on that module. For example, 32x64 means that the module has thirty-two 64-megabit chips. Multiply 32 by 64 and you get 2048 megabits. Since a byte has 8 bits, we need to divide our result of 2048 by 8. Our result is 256 megabytes!
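    If it helps, here's that chips-times-capacity arithmetic as a minimal Python sketch (the figures are just the examples from the paragraph above):

    # Module capacity from a "chips x capacity" listing such as 32x64.
    # Chip capacity is in megabits (Mb); divide the total by 8 to get megabytes (MB).
    def module_capacity_mb(num_chips, chip_capacity_mbit):
        total_megabits = num_chips * chip_capacity_mbit
        return total_megabits // 8

    print(module_capacity_mb(32, 64))  # 32 chips x 64 Mb = 2048 Mb -> 256 MB
    print(module_capacity_mb(64, 64))  # 64 chips x 64 Mb -> 512 MB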



    Memory Access

    Processors tend to access memory in a distinct hierarchy. The hierarchy is simply an ordering: from top to bottom, fastest to slowest, or most important to least important. Whether it comes from permanent storage (e.g. a hard drive) or input (e.g. a keyboard), most data goes into RAM first.

    Going from fastest to slowest, the memory hierarchy is made up of: Registers - Cache [L1; L2] - RAM [Physical and Virtual] - Input Devices

    Registers are fast data stores typically capable of holding a few bytes of data. The registers contain instructions, data, and include the program counter. Modern processors typically contain two levels of cache, known as the "level 1" and "level 2" caches. Cache memory is high speed memory that is integrated into the CPU itself (or very close to it, as in older systems), and is designed to hold a copy of the contents of memory data that were recently accessed by the processor, thus keeping transfer time between processor and memory to a minimum. It takes a fraction of the time, compared to normal RAM, to access cache memory. In modern systems the L2 cache is synchronous SRAM, meaning it runs at full CPU core speed. L1 cache has been synchronous since its appearance in the i486 architecture.

    Now, the processor sends its request to the fastest (and usually smallest and most expensive) level of the hierarchy. If what it wants is there, it can be quickly loaded. If it isn't, the request is forwarded to the next lowest level of the hierarchy, and so on. For the sake of example, let's say the CPU issues a load instruction that tells the memory subsystem to load a piece of data (in this case, a single byte) into one of its registers. First, the request goes out to the L1 cache, which is checked to see if it contains the requested data. If the L1 cache does not contain the data and therefore cannot fulfill the request--a situation called a cache miss--then the request propagates down to the L2 cache. If the L2 cache does not contain the desired byte, then the request begins the relatively long trip out to main memory. If main memory doesn't contain the data, then we're in big trouble, because then it has to be paged in from the hard disk, an act which can take a relative eternity in CPU time.

    Let's assume that the requested byte is found in main memory. Once located, the byte is copied from main memory, along with a bunch of its neighboring bytes in the form of a cache block or cache line, into the L2 and L1 caches. When the CPU requests this same byte again it will be waiting for it there in the L1 cache, a situation called a cache hit.
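    To make that lookup order concrete, here's a toy Python sketch of the hierarchy walk described above (the dictionaries standing in for each level are my own simplification, not a model of any real CPU):

    # Toy walk of the memory hierarchy: check L1, then L2, then main memory,
    # then fall back to the page file on disk. On a hit at a lower level, the
    # data is copied into the faster levels (a crude stand-in for a cache-line fill).
    def load_byte(address, l1, l2, ram, disk):
        for name, level in (("L1 hit", l1), ("L2 hit", l2), ("RAM hit", ram)):
            if address in level:
                value = level[address]
                l1[address] = l2[address] = value  # fill the faster caches
                return name, value
        value = disk[address]                      # the "relative eternity" case
        l1[address] = l2[address] = ram[address] = value
        return "paged in from disk", value

    l1, l2, ram, disk = {}, {}, {0x10: 42}, {0x20: 7}
    print(load_byte(0x10, l1, l2, ram, disk))  # ('RAM hit', 42)
    print(load_byte(0x10, l1, l2, ram, disk))  # ('L1 hit', 42) -- now cached
    print(load_byte(0x20, l1, l2, ram, disk))  # ('paged in from disk', 7)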


    What is Virtual Memory?

    Virtual Memory is common to almost all modern operating systems. With this feature, the operating system creates a file on the hard disk, called the swap file, that is used to store RAM data. So, if you attempt to load a program that does not fit in the RAM, the operating system sends to the swap file parts of programs that are presently stored in RAM but are not being accessed, freeing space in RAM and allowing another program to be loaded. When you need to access a part of the program that the system has stored on the hard disk, the opposite process happens: the system stores on disk parts of memory that are not in use at the time and transfers the original memory content back. So in effect, virtual memory is just hard drive space used to simulate more physical RAM than a system actually has.

    The problem is that the hard disk is a mechanical device, not an electronic one. This means that data transfer between the hard disk and RAM is much slower than data transfer between the processor and RAM. To give you an idea of the magnitude: the processor typically communicates with RAM at a transfer rate of 3200 MB/s (200 MHz bus), while hard disks transfer data at rates such as 66 MB/s and 100 MB/s, depending on their technology (DMA/66 and DMA/100, respectively).
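    To put those numbers in perspective, here's a rough back-of-the-envelope comparison in Python, using the example transfer rates from the paragraph above:

    # Time to move 64 MB of data at the example rates: 3200 MB/s for RAM
    # (200 MHz bus) versus 100 MB/s (DMA/100) or 66 MB/s (DMA/66) for the disk.
    data_mb = 64
    for name, rate_mb_per_s in (("RAM @ 3200 MB/s", 3200),
                                ("Disk @ 100 MB/s", 100),
                                ("Disk @ 66 MB/s", 66)):
        print(f"{name}: {data_mb / rate_mb_per_s * 1000:.0f} ms")

    # RAM @ 3200 MB/s: 20 ms
    # Disk @ 100 MB/s: 640 ms
    # Disk @ 66 MB/s: 970 ms

    Roughly a thirty- to fifty-fold difference, which is why swapping to disk hurts so much.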

    When you realize that there’s no good substitute for the real thing and you therefore decide to add more system RAM, you’ll discover that you use your virtual memory less because you will now have more memory available to complete the tasks that were previously handled or carted off to your virtual memory.

    Memory Basics (Cont'd)

    DRAM Memory Technologies

    DRAM is available in several different technology types. At their core, each technology is quite similar to the one that it replaces or the one used on a parallel platform. The differences between the various acronyms of DRAM technologies are primarily a result of how the DRAM inside the module is connected, configured and/or addressed, in addition to any special enhancements added to the technology.

    There are three well-known technologies:

    Synchronous DRAM (SDRAM)

    An older type of memory that quickly replaced earlier types and was able to synchronize with the speed of the system clock. SDRAM started out running at 66 MHz, faster than previous technologies and was able to scale to 133 MHz (PC133) officially and unofficially up to 180 MHz. As processors grew in speed and bandwidth capability, new generations of memory such as DDR and RDRAM were required to get proper performance.

    Double Data Rate Synchronous DRAM (DDR SDRAM)

    DDR SDRAM is a lot like regular SDRAM (Single Data Rate), but its main difference is its ability to effectively double the data rate without increasing the actual clock frequency, making it substantially faster than regular SDRAM. This is achieved by transferring data not only at the rising edge of the clock cycle but also at the falling edge. A clock cycle can be represented as a square wave, with the rising edge defined as the transition from ‘0’ to ‘1’, and the falling edge as ‘1’ to ‘0’. In SDRAM, only the rising edge of the wave is used, but DDR SDRAM references both, effectively doubling the rate of data transmission. For example, with DDR SDRAM, a 100 or 133 MHz memory bus clock rate yields an effective data rate of 200 MHz or 266 MHz, respectively. DDR modules utilize a 184-pin DIMM packaging which, like SDRAM, allows for a 64 bit data path, allowing faster memory access with single modules over previous technologies. Although SDRAM and DDR share the same basic design, DDR is not backward compatible with older SDRAM motherboards and vice-versa.

    It is important to understand that while DDR doubles the available bandwidth, it generally does not improve the latency of the memory as compared to an otherwise equivalent SDRAM design. In fact, the latency is slightly degraded, as there is no free lunch in the world of electronics or mechanics. So while the performance advantage offered by DDR is substantial, it does not double memory performance, and for some latency-dependent tasks does not improve application performance at all. Most applications will benefit significantly, though.

    Rambus DRAM (RDRAM)

    Developed by Rambus, Inc., RDRAM, or Rambus DRAM, was a totally new DRAM technology that was aimed at processors that needed high bandwidth. Rambus, Inc. agreed to a development and license contract with Intel, which led to Intel’s PC chipsets supporting RDRAM. RDRAM comes in PC600, PC700, PC800 and PC1066 speeds. Specific information on this memory technology can be found at the RAMBUS Website.

    Unfortunately for Rambus, dual channel DDR memory solutions have proved to be quite efficient at delivering about the same levels of performance as RDRAM at a much lower cost. Intel eventually dropped RDRAM support in their new products and chose to follow the DDR dance, at which point RDRAM almost completely fell off the map. Rambus, SiS, Asus and Samsung have now teamed up and are planning a new RDRAM solution (the SiS 659 chipset) providing 9.6 GB/s of bandwidth for the Pentium 4. It will be an uphill battle to get RDRAM back in the mainstream market without Intel's support.

    What’s new, pussycat?

    Enter DDR-2

    Second generation double data rate memory (DDR-2), expected to start at 400 MHz and then go to 533 MHz and 667 MHz, should soon begin replacing DDR-1 (or DDR as we know it). DDR-2 seeks to increase the total memory bandwidth available to the system. This will be accomplished via increased clock frequencies in addition to streamlining the protocols used by the system to make memory reads and writes. According to the JEDEC standard, DDR-2 will have 240 pins and will offer reductions in power consumption and heat output, which are two problems that grow larger as systems carry more and faster memory. In a similar fashion to the migration from SDRAM to DDR, DDR-2 sacrifices latency. An interesting tidbit on the side is that Intel's P4 architecture, using all kinds of optimizations, will be hurt less than AMD by the high latencies of DDR-2. We didn’t complain much last time, so maybe we won’t this time either? DDR-2 will likely be the dominant type of memory in desktop space for several years as DDR-1 is/was, but it won't arrive in quantity until 2005.

    QDR and XDR

    Quad Data Rate Memory (QDR DRAM) - Instead of two data samples per clock cycle, QDR sends four data samples per cycle. QDR is not a JEDEC standard, but instead has been developed as a memory timing technology by Kentron. Kentron has said that QDR technology can leverage existing DDR-1 technology. Note that QDR isn't simply 2x the speed of standard DDR. Instead, Kentron and VIA propose using a single QDR channel to achieve the performance of dual-channel DDR. (DDR-2 is still on VIA's road map)

    XDR DRAM - getting catchy? XDR DRAM stands for eXtreme Data Rate DRAM, and is the final name for Rambus's "Yellowstone" technologies which have been announced in pieces over time. XDR brings all of these formerly announced technologies under one big umbrella, which will be marketed as a high-bandwidth memory solution. XDR is effectively a hybrid of DDR and Rambus DRAM, designed to combine the best elements of both. Rambus claims that their mid-range XDR memory module is 8x faster compared to today's DDR-400. By "faster", they are referring to the module clock speed, along with how many bits can be transmitted per clock cycle. XDR modules are not in production yet, and are not scheduled to go into full-scale production until 2006.



    DDR Memory Speeds

    The speed of DDR is usually expressed in terms of its "effective data rate", which is twice its actual clock speed. PC3200 memory, or DDR400, or 400 MHz DDR, is not running at 400 MHz, it is running at 200 MHz. The fact that it accomplishes two data transfers per clock cycle gives it nearly the same bandwidth as SDRAM running at 400 MHz, but DDR400 is indeed still running at 200 MHz.

    Actual clock speed/effective transfer rate => specification

    100/200 MHz => DDR200 or PC1600
    133/266 MHz => DDR266 or PC2100
    166/333 MHz => DDR333 or PC2700
    185/370 MHz => DDR370 or PC3000
    200/400 MHz => DDR400 or PC3200
    217/433 MHz => DDR433 or PC3500
    233/466 MHz => DDR466 or PC3700
    250/500 MHz => DDR500 or PC4000
    267/533 MHz => DDR533 or PC4200
    283/566 MHz => DDR566 or PC4500

    So how do they come up with those names?
    Well, the industry specifications for memory operation, features and packaging are finalized by a standardization body called JEDEC. JEDEC, the acronym, once stood for Joint Electron Device Engineering Council, but now is just called the JEDEC Solid State Technology Association.

    The naming convention specified by JEDEC is as follows:

    • Memory chips are referred to by their native speed. For example, 333 MHz DDR SDRAM memory chips are called DDR333 chips, and 400 MHz DDR SDRAM memory chips are called DDR400.
    • DDR modules are also referred to by their peak bandwidth, which is the maximum amount of data that can be delivered per second. For example, a 400 MHz DDR DIMM is called a PC3200 DIMM. To illustrate this on a 400 MHz DDR module: each module is 64 bits wide, or 8 bytes wide (each byte = 8 bits). To get the transfer rate, multiply the width of the module (8 bytes) by the effective data rate of the module (400 million transfers per second): 8 bytes x 400,000,000/s = 3,200 MB/second, or 3.2 GB/second, hence the name PC3200.
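    Here's the same naming arithmetic as a small Python sketch (a module width of 8 bytes is assumed throughout, per the JEDEC description above; the marketing names round the result, e.g. PC2100 and PC2700):

    # Peak bandwidth of a DDR module = module width (8 bytes) x effective data rate (MHz).
    # The PC-rating is simply that bandwidth expressed in MB/s.
    def pc_rating(effective_mhz, width_bytes=8):
        bandwidth_mb_s = width_bytes * effective_mhz
        return f"DDR{effective_mhz} -> PC{bandwidth_mb_s} ({bandwidth_mb_s / 1000:.1f} GB/s)"

    for rate in (200, 266, 333, 400):
        print(pc_rating(rate))

    # DDR200 -> PC1600 (1.6 GB/s)
    # DDR266 -> PC2128 (2.1 GB/s)   marketed as PC2100
    # DDR333 -> PC2664 (2.7 GB/s)   marketed as PC2700
    # DDR400 -> PC3200 (3.2 GB/s)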


    To date, the JEDEC consortium has yet to finalize specifications for PC3500 and higher modules. PC2400 was a very short-lived label applied to overclocked PC2100 memory. PC3000 was not and will never be an official JEDEC standard.

    Memory Basics (Cont'd)

    Processors and Bandwidth

    Reminder: Athlon64 Info to be added or maybe dedicated new topic


    The front side bus (FSB) is basically the main highway or channel through which information flows between the processor and all the important functions on the motherboard that surround it. The faster and wider the FSB, the more information can flow over the channel, much as a higher speed limit or wider lanes improve the movement of cars on a highway.

    Conversely, a low speed limit or narrower lanes will slow the movement of cars on the highway, causing a traffic bottleneck. Intel has been able to reduce the FSB bottleneck by accomplishing four data transfers per clock cycle. This is known as quad-pumping, and has resulted in an effective FSB frequency of 800 MHz, with an underlying 200 MHz clock. AMD Athlon XPs, on the other hand, must be content with a bus that utilizes different technology, one that utilizes both the rising and falling sides of a signal. This is in essence the same double data rate technology used by memory of the same name (DDR), and results in a doubling of the FSB clock frequency. That is, a 200 MHz clock results in an effective 400 MHz FSB.

    Processors also have an FSB data width, which can be thought of as the "lanes on a highway" that go in and out of the processor. When the first 8088 processor was released, it had a data bus width of 8 bits and was able to access one character at a time (8 bits = 1 character/byte) every time memory was read or written. The size in bits thus determines how many characters it can transfer at any one time. An 8-bit data bus transfers one character at a time, a 16-bit data bus transfers 2 characters at a time and a 32-bit data bus transfers 4 characters at a time. Modern processors, like the Athlon XP and Pentium 4, have a 64-bit wide data bus enabling them to transfer 8 characters at a time. Although these processors have 64-bit data bus widths, their internal registers are only 32 bits wide and they are only capable of processing 32-bit commands and instructions, while the new AMD64 series of processors is capable of processing both 32-bit and 64-bit commands and instructions.

    When talking memory, bandwidth refers to how fast data is transferred once it starts and is often expressed in quantities of data per unit time. The peak bandwidth that may be transmitted by an Athlon XP or a Pentium 4 is the product of the width of the FSB and the frequency it runs at. To illustrate:

    Athlon XP “Barton” 3200+ -- 400 MHz FSB
    64 (bits) * 400,000,000 (Hz) = 25,600,000,000 bits/sec
    (25,600,000,000 / 8) / (1000 * 1000) = 3200 MB/sec

    Intel Pentium 4 “C” 3.2 GHz -- 800 MHz FSB
    64 (bits) * 800,000,000 (Hz) = 51,200,000,000 bits/sec
    (51,200,000,000 / 8) / (1000 * 1000) = 6400 MB/sec
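    The same arithmetic, generalized in a short Python sketch (a 64-bit bus is assumed, as for the Athlon XP and Pentium 4 above):

    # Peak FSB bandwidth = bus width (bits) x effective FSB frequency (Hz),
    # divided by 8 to get bytes, then by 1,000,000 to get MB/s.
    def peak_bandwidth_mb_s(bus_width_bits, effective_fsb_hz):
        bits_per_second = bus_width_bits * effective_fsb_hz
        return bits_per_second / 8 / (1000 * 1000)

    print(peak_bandwidth_mb_s(64, 400_000_000))  # Athlon XP "Barton" 3200+, 400 MHz FSB -> 3200.0
    print(peak_bandwidth_mb_s(64, 800_000_000))  # Pentium 4 "C", 800 MHz FSB -> 6400.0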

    These figures are theoretical. There's a difference between peak bus bandwidth and effective memory bandwidth. Where peak bus bandwidth is the product of the bus width and bus frequency, effective bandwidth takes into consideration other factors such as addressing and delays that are necessary to perform a memory read or write. The memory could very well be capable of putting out 8 bytes on every single clock pulse for an indefinitely long time, and the CPU could likewise be capable of consuming data at this rate indefinitely. The problem is that there are turnaround times (or delays) in between when the processor places a request for data on the FSB, when the requested data is produced by RAM, and when this requested data finally arrives for use by the CPU. So, potential peak bandwidth is very rarely, if ever, realized.



    DDR Dual Channel

    Most of today’s mainstream chipsets are using some form of dual channel to supply processors with bandwidth. Take note that the memory isn't dual channel, the platform (or chipset) is. In fact there is no such thing as dual channel memory. Rather, it is most often a memory interface composed of two (or more) normal memory modules coordinated by the chipset on the motherboard, or in the case of the AMD64 processors, coordinated by the integrated memory controller. But for the sake of simplicity, we refer to DDR dual channel architecture as dual channel memory.

    The nForce2 platform has two 64-bit memory controllers (which are independent of each other) instead of just a single controller like other chipsets. These two controllers are able to access "two channels" of memory simultaneously. The two channels, together, handle memory operations more efficiently than one module by utilizing the bandwidth of two (or more) modules combined. By combining DDR400 (PC3200) with dual memory controllers, the nForce2 can offer up to 6.4 GB/sec of bandwidth in theory. However, this extra bandwidth produced by dual channel cannot be fully utilized by the Athlon XP and Duron family (K7) of processors. Data (bandwidth) will reach these processors no sooner than the system bus (FSB) allows, so the processor cannot derive an advantage from memory operating faster than DDR266 on a 133/266 MHz FSB, DDR333 on a 166/333 MHz FSB or DDR400 on a 200/400 MHz FSB, even in single channel mode. Visualize a four lane highway, symbolizing your Dual Channel configuration. As you go along the highway you come up to a bridge that is only 2 lanes wide. That bridge is the restriction posed by the double-pumped AMD FSB. Only two lanes of traffic may pass through the bridge at any one time. That's the way it is with the K7 processors and Dual Channel chipsets.
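    Here's a hedged sketch of that "two-lane bridge" point: on a K7, usable memory bandwidth is capped by the FSB, no matter how much the dual channel configuration can supply in theory.

    # Effective memory bandwidth on an Athlon XP is limited by whichever is
    # narrower: the (dual channel) memory subsystem or the double-pumped FSB.
    # All figures in MB/s; both buses are 64 bits (8 bytes) wide.
    def usable_bandwidth_mb_s(fsb_clock_mhz, memory_channels, memory_effective_mhz):
        fsb_bw = 8 * fsb_clock_mhz * 2                      # 64-bit bus, double-pumped
        mem_bw = 8 * memory_effective_mhz * memory_channels
        return min(fsb_bw, mem_bw)

    # Dual channel DDR400 on a 200 MHz (400 effective) K7 FSB:
    print(usable_bandwidth_mb_s(200, 2, 400))  # 3200 -- the FSB is the bottleneck
    # Single channel DDR400 on the same FSB:
    print(usable_bandwidth_mb_s(200, 1, 400))  # 3200 -- already enough to saturate it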

    In case you're wondering, the K in K7 stands for Kryptonite, later changed to Krypton to avoid copyright infringement. Yes, that very same fictional element from comic books that could bring the otherwise all-powerful Superman to his knees. Speaking of which, Intel's P4 architecture is, in contrast, designed to exploit the increased bandwidth afforded by dual channel memory architectures. The 64-bit Quad Pumped Bus of the modern Pentium 4 CPU working at 800 MHz, in theory, requires 6.4 GB/s of bandwidth. This is the exact match of the bandwidth produced by the Intel i875 (Canterwood) and i865 (Springdale) chipset families. The quad pumped P4 FSB seemed like drastic overkill in the days of single channel SDR memory, but is paying handsome dividends in today's climate of dual channel DDR memory subsystems. This is one lasting and productive legacy of Intel's RDRAM efforts. As implemented on the P4, RDRAM was also a dual channel architecture, and mandated the quad-pumped FSB for its extra bandwidth to be exploited. This factor continues to serve the P4 well in the dual channel DDR era we are currently in, and allows the P4 greater memory performance than all other PC platforms, save the new AMD Athlon64 FX with all its new bells and whistles.

    The Athlon 64 FX processor has a fully integrated DDR Dual Channel memory controller providing a 128-bit wide path to memory, therefore eliminating the need for a Dual Channel interface on the motherboard, which traditionally was located in the Northbridge. The old term front-side bus has always represented the speed at which the processor moves memory traffic and other data traffic to and from the chipset. Since the AMD64 processors have the memory controller located on the processor die, memory subsystem traffic no longer has to go through the chipset for CPU-to-memory transfers, so the old term "front-side bus" is no longer really applicable. With AMD64 processors, the CPU and memory controller interface with each other at full CPU core frequency. The speed at which the processor and chipset communicate is now dependent on the chipset's HyperTransport spec, running at speeds of up to 1600 MHz. Although the P4 (800 FSB variety) and the A64 FX (940 pins) both share the same theoretical peak memory bandwidth of 6.4 GB/sec, the Athlon FX realizes significantly more throughput, due mainly to its integrated memory controller which drastically reduces latency. Even so, it still suffers from the required use of registered modules, which are slower than regular modules. The upcoming Athlon 64 / A64 FX processors designed for Socket 939 will be free from this major drawback and will also feature Dual Channel memory controllers. One negative, though, of having the memory controller integrated into the processor is that to support emerging memory technologies, like DDR-2 for example, the controller has to be redesigned and the processor needs to be replaced.

    Memory Basics (Cont'd)

    What do these terms mean?


    Parity
    Parity is a form of error checking. Non-parity is "regular" memory -- it contains exactly one bit of memory for every bit of data to be stored: 8 bits are used to store each byte of data. Parity memory adds an extra single bit for every eight bits of data, used only for error detection. So with parity modules, 9 bits are used to store each byte. This extra chip detects whether data was correctly read or written; however, it will not correct any errors that may have occurred.
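    As a tiny illustration of what that extra bit does (even parity assumed here; the principle is the same either way):

    # Even parity: the parity bit is chosen so that the total number of 1-bits
    # (data + parity) is even. On read, a mismatch means a single-bit error was
    # detected -- but there is no way to tell which bit flipped, so no correction.
    def parity_bit(byte):
        return bin(byte).count("1") % 2

    stored = 0b10110010
    p = parity_bit(stored)             # stored alongside the byte in the extra chip

    corrupted = stored ^ 0b00000100    # flip one bit in "memory"
    print(parity_bit(corrupted) != p)  # True: error detected, but not corrected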

    ECC
    This stands for error correcting circuits, error correcting code, or error correction code. These modules go beyond simple parity checking. They also have an extra chip (or two, depending on how many chips the module has in total) that not only detects errors but also corrects them (depending on type) on the fly. When this correction takes place, the computer will continue without a hiccup; it will have no idea that anything even happened. However, if you have a corrected error, it is useful to know this; a pattern of errors can indicate a hardware problem that needs to be addressed. Chipsets allowing ECC normally include a way to report corrected errors to the operating system, but it is up to the operating system to support this.

    Registered & Unbuffered
    Registered modules contain a 'register' that helps to ensure data is handled properly. Registered modules are therefore slower than unbuffered modules. They are generally used in mission critical machines and machines that require large amounts of memory. The Opteron series of AMD processors uses registered DDR. The Athlon64 FX (940 pins) inherits its architecture from the Opteron 100 series, thus it too requires registered modules to function.

    Registered memory must be supported by the motherboard and cannot be mixed with "Unbuffered" modules. Buffered memory is basically the same as registered memory, but the term is used for older types of memory. Unbuffered or standard memory modules do not have a register. They are cheaper and are the popular choice for home computers.



    Installing New Memory

    The actual installation is the easy part. Some tips while installing memory:

    • Open your computer case and locate the memory sockets on your motherboard. You may need to unplug cables and peripherals, and re-install them afterward.
    • Always handle the module by the edges.
    • Properly ground yourself before taking your RAM in hand; simply grab hold of an unpainted metal surface, such as the frame inside your PC. If you tend to get zapped when you touch metal objects in the vicinity of your PC, consider more drastic measures before opening that RAM packaging and get yourself a grounding wrist strap. Antistatic straps usually cost less than $10 and should be available where memory products are sold; if not, your local electronics parts supplier will have them. Electro-Static Discharge (ESD) is a frequent cause of damage to memory modules. ESD is the result of handling the module without first properly grounding yourself and thereby dissipating static electricity from your body or clothing. If ESD damages memory, problems may not show up immediately and may be difficult to diagnose.
    • DIMMs usually slide straight down into the slot and lock into place when a little pressure is applied to each side of the module itself; the module is then secured by the ejector tabs/clips on the ends of the slots, which automatically snap into a locked position. Note how the module is keyed to the memory socket. This ensures the module can be plugged into the slot one way only -- the right way. You can probably determine how your PC's modules snap in by looking at the already installed memory. Repeat this procedure for any additional modules you are installing. If you're removing RAM, the process is reversed -- unlock the module by pushing out the clips, then lift it out.
    • With the memory in place, turn on your PC. You should see evidence of your newly installed memory as the system does its power-on self test (POST). If you don't, or if a memory error appears or is heard, then remove and re-seat all the memory modules -- the old and the new. If this doesn't solve the problem, remove the new modules and try again.
    • Do not remove any stickers from the modules. Removing these stickers will likely void the warranty. The information present on these stickers may be required for warranty replacement and information, as well as for determining which module you have and its characteristics. These stickers will not have any effect on performance, nor will they be affected by the heat inside the system.



    Which slots to use?

    • If you're using a single module, it's best practice to use the first slot. If using two or more modules in a non-dual channel motherboard, populate the first slot and use any other slots you wish. Q: I've had my single module installed in slot 2 for the last few months now, should I change it? No, it's also best practice to keep using the slot(s) you've been using before. If you replace RAM, insert the new modules in the same slots the older ones were in before.
    • You may find the system overclocks better with the ram in a different slot. It is very hard to predict when this effect occurs, as well as which one might work best. In the overclocking game he who tries the most things wins, and if you are running an overclocked configuration that is asking a lot of the ram it is a good idea to try all available slots to make sure the one you are using yields the best results.
    • If you're using two or more modules of unequal size, you will get the best performance if you put the largest module(s) (in megabytes) in the lowest-numbered slot(s). For example, if your system currently has 256MB of memory and you want to add 512MB, it would be best to put the 512MB module into slot 0 and the 256MB module into slot 1.




    Can I mix DRAM?

    Mixing memory refers to the use of more than one type of DRAM module, each of which has unlike properties such as speed, arrangement, size and even SPD programming. It doesn't always work as expected, so it is generally preferable to avoid doing this. Often, many folks who upgrade machines, particularly older ones, find themselves in a situation where they may not be able to find additional memory that is identical to that which is already in the machine. The risk of running into problems is greatly increased if the modules used have significantly different properties. If you mix modules of different speeds, the system will only run at the speed of the slowest module. As you will see, some systems automatically detect the properties of the memory modules being used, and set the system timing and other settings accordingly. They usually look at the speed of the memory in the first bank when detecting these settings. So if you use two or more dissimilar modules, it is advisable to place the slowest module in the first slot.

    As well, if the goal is overclocking, often the best results are obtained with perfectly matched modules. Additionally dual channel architectures generally work very poorly with mis-matched modules. There is no guarantee a mismatch won't work for one reason or another, and it won't really hurt anything to try (assuming your data is backed up), but to maximize the chances of success, modules should match.



    How do I use Dual Channel?

    Dual Channel requires at least two modules for operation. It is recommended that the modules you use be of the same size, speed, arrangement etc. Dual Channel is optional on the original nForce2 motherboards and the nForce2 Ultra 400. You can also choose to run in single channel mode on these motherboards. nForce2 400 boards are single-channel only. Most dual channel capable nForce2 motherboards come with three slots. On these motherboards the first memory controller controls only the first slot (the slot by itself), while the second memory controller controls the last two slots (which are usually closer together). Name them slots 1, 2 & 3 respectively. To implement Dual Channel, it is necessary to occupy slot 1 (channel 0) and either one of the two slots that are closer together, slot 2 or 3 (channel 1). The entire configuration would then be running in 128-bit mode.

    In addition, on nForce2 motherboards, you may use three modules in Dual Channel mode by filling the third, unoccupied slot. With three sticks, slot 1 remains channel 0 while slots 2 & 3 become channel 1. To maintain 128-bit mode with all three slots filled, each channel must have an equal amount of memory. For example, slot 1 could be filled with a 512 MB module, while slots 2 & 3 are populated with 256 MB modules. If you were to use three modules of the same size, then only the equivalent of the first two modules would run in 128-bit Dual Channel mode. For example, using 3x 256 MB modules will have the first 512 MB running in 128-bit Dual Channel mode, while the remaining 256 MB will be in 64-bit Single Channel mode.
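    Here's a small sketch of the slot rule described above (slot 1 as channel 0, slots 2 & 3 as channel 1); it's a simplification of what the chipset actually does, but it reproduces the examples:

    # Given module sizes (in MB) for slots 1, 2 and 3 of a three-slot nForce2 board,
    # estimate how much memory runs in 128-bit dual channel mode and how much falls
    # back to 64-bit single channel mode.
    def nforce2_dual_channel_mb(slot1, slot2, slot3):
        channel0 = slot1
        channel1 = slot2 + slot3
        dual = 2 * min(channel0, channel1)     # the matched portion of both channels
        single = (channel0 + channel1) - dual  # whatever is left over
        return dual, single

    print(nforce2_dual_channel_mb(512, 256, 256))  # (1024, 0) -- fully dual channel
    print(nforce2_dual_channel_mb(256, 256, 256))  # (512, 256) -- matches the 3x 256 MB example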

    Intel dual-channel systems are different. They have either two or four slots, and to run dual channel mode must have either one or two pairs of (hopefully) matching modules. Running three modules on a P4 system will force it to run in single channel mode, and is therefore to be avoided.

    Consult your motherboard manual for instruction on exactly which slots to use.

    =================================
    Bios Settings
    =================================




    Memory timings

    Memory performance is not determined entirely by bandwidth or MHz, but also by the speed at which the memory responds to a command, or the time it must wait before it can start or finish the process of reading or writing data. These are memory latencies or reaction times (timings). Memory timings control the way your memory is accessed and can be a contributing factor to better or worse 'real-world' performance of your system.

    Internally, DRAM has a huge array of cells that contain data. (If you've ever used Microsoft's Excel, try to picture it that way.) A pair of row and column addresses can uniquely address each cell in the DRAM. DRAM communicates with a memory controller through two main groups of signals: Control-Address signals and Data signals. These signals carry the data to be read or written, the address where that data is located in the memory banks, and the control signals, which are the various commands needed to read or write. There are delays before a control signal can be executed or finished, and this is where we get memory timings.

    Memory timings are most often expressed as a string of four numbers separated by dashes, from left to right or vice-versa, like this: 2-2-2-5 [CAS-tRCD-tRP-tRAS]. These values represent how many clock cycles long each delay is, but are not expressed in the order in which they occur. Different BIOSes will display them differently, and there may be additional options (timings) available.
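    For what it's worth, here's a trivial Python sketch that splits a timing string like 2-2-2-5 into named values, assuming the CAS-tRCD-tRP-tRAS order used above (your BIOS may list them differently):

    # Split a "2-2-2-5" style string into named timings, assuming CAS-tRCD-tRP-tRAS order.
    def parse_timings(timing_string):
        names = ("CAS", "tRCD", "tRP", "tRAS")
        values = [float(x) if "." in x else int(x) for x in timing_string.split("-")]
        return dict(zip(names, values))

    print(parse_timings("2-2-2-5"))    # {'CAS': 2, 'tRCD': 2, 'tRP': 2, 'tRAS': 5}
    print(parse_timings("2.5-3-3-7"))  # CAS can be a half value such as 2.5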



    Which timings mean what?

    In most motherboards, numerous settings can be found to optimize your memory. These settings are often found in the Advanced Chipset section of the popular Award BIOSes. In certain instances, the settings may be placed in odd locations and even given unfamiliar names, so please consult your motherboard manual for specific information. Below are common latency options:
    • Command rate - is the delay (in clock cycles) between when chip select is asserted (i.e. the RAM is selected) and commands (i.e. Activate Row) can be issued to the RAM. Typical values are 1T (one clock cycle) and 2T (two clock cycles).
    • CAS (Column Address Strobe or Column Address Select) - is the number of clock cycles (or ticks, denoted with T) between the issuance of the READ command and when the data arrives at the data bus. Memory can be visualized as a table of cell locations, and the CAS delay is invoked every time the column changes, which happens more often than the row changing.
    • tRP (RAS Precharge Delay) - is the speed or length of time that it takes DRAM to terminate one row access and start another. In simpler terms, it means switching memory banks.
    • tRCD (RAS (Row Address Strobe) to CAS delay) - As it says, it's the time between RAS and CAS access, i.e. the delay between when a memory bank is activated and when a read/write command is sent to that bank. Picture an Excel spreadsheet with numbers across the top and along the left side. The numbers down the left side represent the Rows and the numbers across the top represent the Columns. The time it would take you, for example, to move down to Row 20 and across to Column 20 is RAS to CAS.
    • tRAS (Active to Precharge or Active Precharge Delay) - controls the length of the delay between the activation and precharge commands ---- basically how long after activation can the access cycle be started again. This influences row activation time which is taken into account when memory has hit the last column in a specific row, or when an entirely different memory location is requested.

    These timings or delays occur in a particular order. When a Row of memory is activated to be read by the memory controller, there is a delay before the data on that Row is ready to be accessed, this is known as tRCD (RAS to CAS, or Row Address Strobe to Column Access Strobe delay). Once the contents of the row have been activated, a read command is sent, again by the memory controller, and the delay before it starts actually reading is the CAS (Column Access Strobe) latency. When reading is complete, the Row of data must be de-activated, which requires another delay, known as tRP (RAS Precharge), before another Row can be activated. The final value is tRAS, which occurs whenever the controller has to address different rows in a RAM chip. Once a row is activated, it cannot be de-activated until the delay of tRAS is over.
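    To see how those delays add up, here's a rough worked example in Python. It only counts the timings named above for a read that has to open a new row, and converts the cycle count to nanoseconds at a given clock; real access patterns are more complicated than this.

    # Rough "first word" latency for a read that must open a new row:
    # tRP (close the old row) + tRCD (activate the new row) + CAS (read command
    # to data), first in clock cycles and then in nanoseconds.
    def first_word_latency_ns(cas, trcd, trp, clock_mhz):
        cycles = trp + trcd + cas
        ns_per_cycle = 1000 / clock_mhz
        return cycles * ns_per_cycle

    print(first_word_latency_ns(2.0, 2, 2, 200))  # 2-2-2 at 200 MHz -> 30.0 ns
    print(first_word_latency_ns(3.0, 4, 4, 250))  # 3-4-4 at 250 MHz -> 44.0 ns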



    What is SPD?

    SPD (Serial Presence Detect) is a feature available on all DDR modules. This feature solves compatibility problems by making it easier for the BIOS to properly configure the system to optimize your memory. The SPD device is an EEPROM (Electrically Erasable Programmable Read Only Memory) chip, located on the memory module itself, that stores information about the module's size, timings, speed, data width, voltage, and other parameters. If you configure your memory by SPD, the BIOS will read those parameters during the POST routine (boot-up) and will automatically adjust values in the BIOS according to the module manufacturer's preset specifications.

    There is one caveat, though. At times the SPD contents are not read correctly by the BIOS. With certain combinations of motherboard, BIOS and memory, setting SPD or Auto may result in the BIOS selecting full-fast timings (the lowest possible numbers), or at times full-slow timings (the highest possible numbers). This is often the culprit in situations where it appears that a particular memory module is not compatible with a given board. Often in these cases the SPD contents are not being read correctly and the BIOS is using faster memory timings than the module or the system as a whole can boot with. In cases like these, try replacing the module with another, or set the BIOS to allow manual timings and set those timings to safer (higher) values; this will often allow the combination to work.



    To tweak or not to tweak?

    In order to really maximize performance from your memory, you'll need to gain access to your system's BIOS. There is usually a master memory setting, often rightly called Memory Timing or Memory Interface, which usually gives you the choice of setting your memory timings by SPD or Auto, choosing preset Optimal or Aggressive timings (e.g. Turbo and Ultra), or, lastly, using an Expert or Manual setting that will enable you to manipulate individual memory timing settings to your liking.

    Are the gains of the perfect, hand-tweaked memory timing settings worth it over the automatic settings? If you're just looking to run at stock speeds and want absolute stability, then the answer to that question would probably be no. The relevance would be nominal at best and you would be better off going by SPD or Auto. However, if your setup is up on the cutting edge of technology or you’re pushing performance to the limit as do some overclockers, or gamers or tweakers, it may have great relevance.

    Bios Settings (Cont'd)

    Ok so I want to tweak, what do I do?

    Now for the kewl stuff!!!
    Here are general guidelines to follow while "tweaking". Some of these points can go much deeper than stated. If you have any Qs or if anything isn't clear, you can PM me.


    • The first order of business, when tweaking your memory, is to deactivate the automatic RAM configuration -- SPD or Auto. With SPD enabled, the SPD chip on the memory module is read to obtain information about the timings, voltage and clock speed and those settings are adjusted accordingly. These settings are, however, very conservative to ensure stable operation on as many systems as possible. With a manual configuration, you can customize these settings for your own system to your liking.

      As with CPU/video card overclocking, adjusting the memory timings should be done methodically and with ample time to test each adjustment.

      Testing each adjustment WILL take a lot of your time. If you've been reading this, then it appears you've got a lot of it! ....So the one and only way to know if your memory is capable of your desired timings is to use stress testing programs, benchmarks or even your favorite game. Three popular programs used for this are memtest86, Prime95 and 3DMark2001. I recommend at least 8 hours of testing before concluding that your RAM is stable at your FINAL timings. Before getting to your final timings, you will have made smaller adjustments from your "stock" timings. You may carry out shorter test runs after each of these adjustments. More experienced users may just skip this and take the chance.

    • Lower values (or figures) = better performance, but lower overclockability and possibly diminished stability. Higher values = lesser performance, but increased overclockability and more stability.

      As a general rule, a lower value (or timing) will result in improved performance. After all, if it takes fewer cycles to complete an operation, then more operations fit within a given amount of time. However, this comes at a cost, and that is stability. It is similar to wireless networking with short and long preambles. A long preamble might be slower, but in a heavy network environment it is much more reliable than a short preamble because there is more certainty a packet is for your NIC. The same goes for memory: in general, the more cycles used, the more stable the operation, because accessing precisely the right part of the memory requires accuracy, and allowing more time for each operation makes it more reliable.

      The memory timings can also play a role in how far the memory will go, in keeping with the FSB. Lower timings may hinder how fast the memory can run, while higher timings allow for more memory speed. So which is better, lower timings or higher memory speeds? Overall data throughput depends on bandwidth and latencies. Peak bandwidth is important for certain applications that employ mostly streaming memory transfers. In these applications, the memory will burst the data, many characters or bytes after each other. Only the very first character will have a latency of maybe several cycles, but all other characters after it will be delivered one after another. Other applications with more random accesses, like most games, will get more mileage out of lower latency timings. So if you have to choose, weigh the importance of higher memory clocks against lower latency timings and decide which is most important for your application. (See "Buying Memory" for more)

      If you are not planning on overclocking the clock speed of your RAM or if you have fast RAM rated at speeds above that of your current FSB, it may be possible to just lower the timings for a performance gain in certain applications that require most frequent accesses to system memory like, for instance, games. Memory timings can vary depending on the performance of RAM chips used by the module maker. One might think that, by buying Corsair's XMS PC4400 rated at 3-4-4-8(for example), they'll be able to lower latencies down to 2-2-2-6 if running at just 200MHz (or PC3200 speeds). It doesn't quite work that way. Not all memory modules will exhibit the ability to use certain timings without producing errors, instability or even worse.

    • Most typical values for memory timings are between 2 and 4. You might ask: why can't we use 1 or even 0 for memory timings? JEDEC specifies that it's not possible for current DRAM technology to operate as it should under such conditions. Depending on the motherboard, you might be able to squeeze a '1' onto certain timings, but it will very likely result in memory errors and instability. And even if it doesn't, it is unlikely to result in a performance gain. tRCD & tRP are usually equal numbers between 2 and 3. CAS latency should be either 2.0 or 2.5. Many systems running performance PC3200/PC3500 memory fail to boot with a CAS 3 setting.

      CAS is not the most critical of the various timings, despite what is taught by many and what RAM sellers try to market. In general, the importance of CAS when placed against tRP and tRCD is nominal. Reducing CAS has a relatively minor effect on memory performance, while lower tRP & tRCD values result in a much more substantial gain. In other words, if you had to choose, 3-3-2.5 would be better than 4-4-2.0 (tRCD-tRP-CAS). The value of tRCD most often accounts for the biggest hit in performance if increased, followed by tRP, then CAS. So if you need to loosen RAM timings in hopes of achieving a higher clock, it is recommended and accepted that you increase the value of CAS first, then tRP, and then finally tRCD.

    • tRAS is unique, in that lowering it can lead to problems and lesser performance. As said before, it is the delay after row activation until the access cycle can be started again. If the value of tRAS is too high, the row will be unnecessarily delayed from starting another cycle. However, if it is set too low, there may not be enough time to complete the cycle. When that happens, there will be loss or corruption of data. This whitepaper from the RAM manufacturer Mushkin outlines how tRAS should be the sum of tRCD, CAS, and 2 (see the sketch after this list). For example, if you are using a tRCD of 2 and a CAS of 2 on your RAM, then you should set tRAS to 6. At values lower than that, theory would dictate lesser performance as well as catastrophic consequences for data integrity, including hard drive addressing schemes - truncation, data corruption, etc. - as a cycle or process would be ended before it's done. How is it possible for memory timings to affect my hard drive? When the system is shut down or a program is closed, physical RAM data that has become corrupted may be written back to the hard drive, and that's where the consequences for the hard drive come in. Also, let's not forget that physical RAM data is moved by the operating system to the virtual memory space located on the hard drive.

      While it's important to consider the advice of experts like Mushkin, your own testing is still valuable. Systems – both AMD & Intel alike, can indeed operate with stability with 2-2-2-5 timings, and even exhibit a performance gain as compared to the theoretically mandated 2-2-2-6 configuration. The most important thing in any endeavor is to keep an open mind, and don't spare the effort. Once you've tried both approaches extensively it will be clear to you which is superior for your particular combination of components.

    • Unlike CPU overclocking or video card tweaking, adjusting memory timings offers very little physical risk to your system, other than the possibility of a windows failure to load or a program failure while testing.
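    And the Mushkin rule of thumb from the tRAS bullet above, as a one-liner you can sanity-check your settings against (treat it as a starting point, not a law; test your own combination):

    # Suggested tRAS per the Mushkin whitepaper cited above: tRCD + CAS + 2.
    def suggested_tras(trcd, cas):
        return trcd + cas + 2

    print(suggested_tras(2, 2))  # 6 -- e.g. timings of 2-2-x-6
    print(suggested_tras(3, 3))  # 8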




    The Anomaly: nVIDIA’s nForce2 and tRAS

    An anomaly can be described as something that's difficult to classify; a deviation from the norm or common form. This is exactly the situation with tRAS (Active to Precharge) and nVIDIA's nForce2 chipset. As said before, not sparing the effort is what led to the initial discovery of this anomaly many months ago. It's pretty well known by now: in a nutshell, a higher tRAS (i.e. higher than, say, the Mushkin-mandated sum of CAS + tRCD + 2) on nForce2 motherboards consistently shows slightly better results in several benchmarks and programs. In most cases, 11 seems to be the magic number. Other chipsets do not display this "deviation from the norm", so what makes the nForce2 different?

    'TheOtherDude' has given a possible explanation for this anomaly in this thread.

    “Unlike most modern chipsets, the Nforce2 doesn't seem to make internal adjustments when you change the tRAS setting in the BIOS. These "internal" (not really sure if that’s the right word) settings seem to include Bank Interleave, Burst Rate and maybe even Auto-precharge. For optimal performance, tRAS (as measured in clock cycles) should equal the sum of burst length, plus the finite time it takes the RAM to conduct a number of clock independent operations involved with closing a bank (~40 ns) minus one clock if Auto-precharge is enabled (this factor can be slightly effected by CAS, but should not play a role in optimal tRAS). To complicate things even more, one bank cannot precharge a row while the other specifies a column. This brings tRCD into the mix.

    Higher isn't always better, but the reason everything is so weird with tRAS and the Nforce2 is simply because the chipset doesn't make the internal optimizations to accommodate your inputted tRAS value like most other chipsets.”
