Random Access Memory

    While the CPU is the heart of a computer system, processing data and shuttling it back and forth as required, that data still needs to be held somewhere. That's where RAM comes into play. RAM stands for Random Access Memory: any location in memory can be read or written at any time, without having to wait. This contrasts with sequential access memory, where you have to rewind or fast-forward a tape, or wait for the right moment, to reach your data.
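
    A minimal sketch of the difference, using nothing more than a plain C array standing in for storage (purely illustrative; this is not how a real memory chip or tape drive is actually driven):

        #include <stdio.h>

        /* Random access jumps straight to a location; sequential access has to
           "wind" past everything in front of it first. Illustrative only. */

        int random_read(const int *mem, int index) {
            return mem[index];                      /* one step, no matter which index */
        }

        int sequential_read(const int *tape, int index) {
            int value = 0;
            for (int pos = 0; pos <= index; pos++)  /* step past every earlier cell */
                value = tape[pos];
            return value;
        }

        int main(void) {
            int storage[8] = {3, 1, 4, 1, 5, 9, 2, 6};
            printf("%d %d\n", random_read(storage, 5), sequential_read(storage, 5));
            return 0;
        }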

    Most people simply call RAM "memory" now. It forms the main working level of the computer's storage hierarchy. Just as clock speed is misunderstood to be the only measure of Central Processing Unit power, capacity is often thought to be the only measurement that matters when it comes to Random Access Memory.

    Memory is not all about capacity. Unless a system or game is idle, memory will not hold the same data indefinitely; it's constantly moving data on and off the chips to keep up with ever-changing workloads. In other words, capacity is important, but so is how fast data can be moved on and off the chip. In situations where the machine has to multitask (such as PCs, the PlayStation 3, and the Xbox 360), extra capacity can increase performance, but the returns diminish quickly (i.e., doubling the RAM might really boost performance, but doubling it again won't do much). More available RAM is helpful for keeping more of the data you want immediately at hand, reducing how often the system has to reach for the much slower hard drive/DVD/Blu-ray disc, which to a processor takes an eternity.
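
    To give a sense of just how long that "eternity" is, here's a rough order-of-magnitude comparison. The access times are ballpark assumptions (about 100 nanoseconds for a RAM access, about 10 milliseconds for a mechanical hard drive seek), not measurements of any particular machine:

        #include <stdio.h>

        /* Ballpark figures only; real numbers vary widely by hardware. */
        int main(void) {
            double ram_access_s = 100e-9;   /* ~100 nanoseconds per RAM access      */
            double disk_seek_s  = 10e-3;    /* ~10 milliseconds per hard drive seek */

            double ratio = disk_seek_s / ram_access_s;
            printf("A disk seek takes roughly %.0f times longer than a RAM access.\n", ratio);
            /* Scaled up: if a RAM access took one second, the seek would take days. */
            printf("If RAM took 1 second, the disk would take about %.1f days.\n",
                   ratio / 86400.0);
            return 0;
        }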

    Like a CPU, memory is measured in clock speed and latency, and latency tends to affect memory more than it does processors. This is because one also has to take into account the speed of the bus, the shared electrical pathway between components. With RAM embedded on the CPU die, the bits travel a very short distance over a dedicated pathway, while RAM placed elsewhere requires the bits to travel the shared bus, which may have other devices using it. This means factors such as bus speed and the number of other devices contending for the bus contribute to data-transfer latency. Even the physical length of the bus can become a non-trivial factor in how quickly data can be moved in and out of RAM.
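
    A quick back-of-the-envelope calculation shows why distance matters. The clock speed and signal velocity below are assumed round numbers for illustration, not the specs of any particular bus:

        #include <stdio.h>

        /* Assumed figures: a 3 GHz clock and a signal travelling at roughly
           two-thirds the speed of light along the wires. */
        int main(void) {
            double clock_hz     = 3.0e9;    /* assumed clock rate         */
            double signal_speed = 2.0e8;    /* metres per second, assumed */

            double cycle_s      = 1.0 / clock_hz;
            double cm_per_cycle = signal_speed * cycle_s * 100.0;

            printf("One clock cycle lasts about %.2f ns.\n", cycle_s * 1e9);
            printf("A signal only covers about %.1f cm of wire in that time.\n", cm_per_cycle);
            return 0;
        }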

    In addition to clock speed, latency, and capacity, memory is also measured in bandwidth: the rate at which data can flow between the processors and the memory. The maximum bandwidth is typically enormous, on the order of 500 to 1,000 times the memory's entire capacity every second, and is unlikely to ever be fully used up (which is why the figure is called a "theoretical maximum": it could be reached, in theory). The headroom is there to ensure the smoothest possible flow between the memory and the processors. How all these measurements stack up depends on the type of memory.
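
    The "theoretical maximum" usually comes from a simple rule of thumb: peak bandwidth = clock rate × transfers per cycle × bus width. The numbers below are a hypothetical round-number example (a 64-bit bus moving one word per clock at 100 MHz), not any specific product:

        #include <stdio.h>

        /* Rule-of-thumb peak bandwidth calculation with assumed round numbers. */
        int main(void) {
            double clock_hz           = 100e6;  /* assumed 100 MHz memory clock */
            double transfers_per_tick = 1.0;    /* one transfer per clock cycle */
            double bus_width_bytes    = 8.0;    /* assumed 64-bit (8-byte) bus  */

            double peak = clock_hz * transfers_per_tick * bus_width_bytes;
            printf("Theoretical peak: %.0f MB/s\n", peak / 1e6);
            return 0;
        }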

    There are several ways to classify RAM types, but the two most used are the technological classification (that is, by the technology underlying each type) and the usage classification, which breaks the types down by their purpose. Here they are:

    By technology:

    Historical

    Well before modern memory types became available, early machines still needed to store their data. Even the ENIAC, which didn't even have a stored program (it was controlled by physically wiring its modules together in sequence), had some storage for data. Initially this used the very straightforward and obvious solution: static memory, that is, keeping the data in electronic circuits called triggers, or flip-flops, which can rest in one of two stable states. But because word size in those early machines was somewhere between 20 and 40 bits, and one flip-flop holds only a single bit of information while requiring several electronic valves, at a time when the only available type of valve was a huge and fragile vacuum tube, having more than a couple dozen such "registers" was simply impractical.

    That's where everything got interesting. To hold larger amounts of data several technologies were used, some of them decidedly odd: storing the data as acoustic waves (yes, bursts of sound) in mercury-filled tubes, or as magnetic pulses on a rotating drum. Technically, these types of memory weren't even random-access, they were sequential, but they simulated RAM reasonably well. Then there was a technology where the bits were stored as dots on the phosphor surface of a CRT, which had the advantage that the programmer could literally see the bits stored, something that often helped in debugging software.

    But most of these technologies were not terribly practical; they were expensive, slow and (especially in the case of mercury delay lines) environmentally dangerous. Dr. An Wang (then of Harvard's Computation Laboratory, whose patent IBM later bought) proposed a solution which took the industry by storm: magnetic core memory.

    Close up of core memory.

    Core memory consisted of thousands of tiny (1–2 mm wide) donut-shaped magnetic cores, strung on a grid of thin wires. By manipulating the currents sent through these wires, the state of any individual core could be read or written. Since there were no moving parts, as with a delay line or a drum, access time was much quicker. Core memory was also substantially denser than either delay-line or CRT memory, and used less power as well. It also held its contents when the power was off, a property widely relied upon at the time.

    In addition to their compact size (for example, a unit holding 1K, a rather generous amount for the time, was a flat frame only about 20×20×1 cm), cores were also rather cheap. They had to be assembled by hand, even in their last days (early attempts to mechanize the process failed and were abandoned once semiconductor RAM appeared), so most manufacturers used the inexpensive labor of East Asian seamstresses and embroiderers (who had been made redundant by the widespread adoption of sewing machines), keeping it affordable. Most Mainframes and Minicomputers used core memory, and it was ingrained into the minds of the people who worked on them to such an extent that even now you can still run into the word "core" used as a synonym for RAM (as in "core dump"), even though computers in general haven't used it since The Seventies.

    Solid State RAM

    Solid state RAM was the technology that finally ended the core era. It was an outgrowth of the attempts to miniaturize electronic circuits. Transistors had replaced vacuum tubes in early computers relatively quickly, thanks to their smaller size, reliability (they had no glass envelopes to break or filaments to burn out) and much lower power consumption. However, even the smallest transistors at the time were about the size of a small pencil eraser, and it took many thousands of them to make a working computer, so computers still remained bulky and expensive. In The Fifties two engineers independently figured out how to put several transistors and other electronic components on the same piece of semiconductor, and thus the integrated circuit was born. Electronic circuits started to shrink almost overnight, and one of the first applications of the new chips in the computer industry was RAM.

    • Static RAM, as mentioned above, is a type of memory where each bit is held by the state of a small circuit called a flip-flop. With one IC replacing several transistors and their attendant circuitry, static memory became much more affordable, and started appearing in larger and larger amounts. The main advantage of static memory is that it's very quick: its speed is basically limited only by the physical processes inside the transistors, and those are extremely rapid. It also draws significant power only when switching state, needing just a trickle to hold or read data, so it dissipates almost no heat. But each bit of static memory still takes several transistors (typically four to six) to store, so it remains relatively bulky and expensive.
    • Dynamic RAM, on the other hand, uses capacitors to store bits (generally one capacitor and one transistor per bit, which takes much less silicon space), so it's much more compact and thus cheaper. Unfortunately, capacitors tend to lose their charge over time, so they have to be periodically recharged, usually by reading the memory and writing the same data back again, a process called "memory refresh" (see the toy sketch after this list). Refreshing takes either the CPU's attention or additional support circuitry on the memory chip itself, and, to add insult to injury, the need to constantly refresh the contents means that when the power gets turned off, everything in memory is erased. Core, being magnetic, was completely non-volatile, and static RAM requires so little power that it can be kept alive with a simple lithium watch battery. Still, the enormous density that DRAM offers makes it the most affordable and most widely used type of memory ever.
    • Magnetic RAM is basically a return to core on a new level, where each ferrite donut of the old-style core is replaced by a ferrite grain in an IC. It has the density advantage of DRAM (there is some penalty, but it's not that big), its speed is closer to static RAM, it's completely non-volatile, and it can be written as fast as it is read (not to mention as many times as needed), negating most of the drawbacks of Flash Memory. Unfortunately, with Flash selling like ice cream on a hot day, few producers can spare their fabs to produce it, and it requires a significant motherboard redesign to boot. This, plus several technological bottlenecks, seems to have locked it in Development Hell for the time being.
      • On a side note, there's also a security issue with non-volatile memory. For example, if a computer doing encryption had non-volatile memory, a clever attacker could turn off the machine, pull out the memory, and dump its contents without fear of losing them. To do the same thing with DRAM (whose contents actually decay over time rather than vanishing instantly), the attacker would have to dunk the RAM chip in liquid nitrogen to slow the discharge process to a crawl.
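
    Here's the toy sketch mentioned above. It models an SRAM cell as a latched bit and a DRAM cell as a leaking charge level; the leak rate and refresh interval are made-up, wildly exaggerated numbers chosen purely to show why refresh matters:

        #include <stdio.h>

        /* Toy model only: real cells are analog circuits, not structs. */
        typedef struct { int bit; }       sram_cell;  /* flip-flop: holds its state while powered */
        typedef struct { double charge; } dram_cell;  /* capacitor: charge leaks away over time   */

        int dram_read(const dram_cell *c) { return c->charge > 0.5; }

        void dram_refresh(dram_cell *c) {             /* read the bit, rewrite it at full strength */
            c->charge = dram_read(c) ? 1.0 : 0.0;
        }

        int main(void) {
            sram_cell s = { 1 };
            dram_cell refreshed = { 1.0 }, neglected = { 1.0 };

            for (int tick = 0; tick < 200; tick++) {
                refreshed.charge *= 0.99;             /* exaggerated leakage            */
                neglected.charge *= 0.99;
                if (tick % 32 == 31)
                    dram_refresh(&refreshed);         /* periodic refresh saves the bit */
            }
            printf("SRAM: %d  refreshed DRAM: %d  unrefreshed DRAM: %d\n",
                   s.bit, dram_read(&refreshed), dram_read(&neglected));
            return 0;
        }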

    DDR RAM

    DDR stands for "Double Data Rate". Typically, RAM transfers data once per clock cycle, while this kind of memory does it twice. It comes at the cost of slightly worse latency, but effectively doubling the transfer rate is a huge advantage for gaming. The Xbox used DDR memory. The GDDR3 variant has lower power consumption, even faster clock speeds, and allows flushing the memory to keep dead data from clogging the chip. These advantages got that kind of memory used in the Xbox 360, the PlayStation 3, and the Wii.
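
    A sketch of what the doubled transfer rate buys, using the same rule-of-thumb bandwidth formula as above with assumed round numbers (same clock, same bus width, only the transfers per cycle change):

        #include <stdio.h>

        /* Hypothetical numbers: 200 MHz clock, 64-bit bus. */
        int main(void) {
            double clock_hz  = 200e6;
            double bus_bytes = 8.0;

            double single_rate = clock_hz * 1.0 * bus_bytes;  /* one transfer per cycle  */
            double double_rate = clock_hz * 2.0 * bus_bytes;  /* two transfers per cycle */

            printf("Single data rate: %.1f GB/s\n", single_rate / 1e9);
            printf("Double data rate: %.1f GB/s\n", double_rate / 1e9);
            return 0;
        }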

    Rambus DRAM

    "Rambus Dynamic RAM" focuses on slightly higher bandwidth, and much higher clock speed. It does come at the cost of higher power consumption, higher capacity, and slower latency. The last one has been reduced in later versions, to the point where the XDR variant on the PS3 has latency no slower than DDR memory.

    This meant the earlier versions were not that good for graphics. It didn't hurt the PlayStation 2, which used it for regular memory, not for video memory, but the Nintendo 64 did use it for video memory. This was one of many bottlenecks that kept the system from performing as well as its graphics looked.

    Rambus DRAM is evidently good for video playback, hence the PS2 and PS3 being considered such good movie players for their times. The PlayStation Portable doesn't use that kind of memory, since the increased power consumption would drain the battery; this is why UMD movie playback on TVs looks notably washed out. Rambus memory was also briefly used in home PCs in the early 2000s; however, although it was indeed blazing fast, upgrading it was way too expensive.

    EDRAM

    Typically, a Graphics Processing Unit does not have a cache; video memory fills that role. But "Embedded Dynamic RAM" comes pretty damn close. It sits right next to the processor instead of inside it. The gains over a true cache are a larger capacity (though still much smaller than standard memory) and a clock speed that still matches the processor. The trade-offs are lower bandwidth (though still about 10 to 100 times that of standard memory), worse latency (though still much, much better than standard memory's), and increased manufacturing cost.

    The greatly increased speed means that the video memory can handle about the same amount of data as standard memory of a much larger size. The PS2, Game Cube, Wii, and 360 all use this kind of memory. Developers initially thought it was too small, but once they caught on, the systems ran just fine with it. Future systems built around the Cell Processor plan on using this kind of memory, as it's practically fast enough to keep up with that processor.

    1T-SRAM

    1T-SRAM is probably a good middle ground between standard memory and embedded memory. It has average clock speed and bandwidth, but also average capacity, combined with a latency only slightly worse than EDRAM's. The Game Cube and the Wii use this kind of memory (yes, the Wii uses this memory, EDRAM, and GDDR3 all at once).

    By usage:

    Registers

    This is the fastest type of memory available. All processors have registers, which are normally very fast static RAM. Each stores about a word of data: the number of bits the processor can handle at once. The most important one is the instruction counter (the program counter), as it holds the location of the next instruction. Another one that's always present is the status word. Others may exist too, some available to the programmer and some not. Interestingly enough, complex instruction set processors have relatively few registers, while reduced instruction set processors can easily have over 100. For example, most modern x86 CPUs, being very advanced RISC machines internally, use automatic renaming of their roughly 128 internal registers to simulate several sets of the 14 traditional x86 registers, thus allowing several CISC instructions to be run at once.

    Cache

    Just in case you are wondering, it's pronounced "cash", not "ca-shay". This is memory stuck right inside the CPU. Why? Well, the CPU often needs somewhere to keep the data it's actively working on. It doesn't need to store a lot, but it needs to get at it as fast as possible, and cache fills that purpose. By putting it right inside the processor, the latency is no worse than the processor's own and the clock speed matches. It does mean the cache can only be so large: the 360 has the most cache of any home console, and it's just 1 megabyte. But it's the speed that counts, since it's designed to keep up with the CPU.
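
    A common way to watch the cache (or the lack of it) at work is to walk the same array in two different orders. Both loops below add up the same numbers, but the first reads consecutive addresses, so each chunk fetched from main memory gets fully reused, while the second jumps around and keeps having to go back out to slower memory. The array size and loops are illustrative; actual timings depend entirely on the CPU:

        #include <stdio.h>

        #define N 1024

        static int grid[N][N];

        long sum_row_major(void) {        /* consecutive addresses: cache-friendly  */
            long total = 0;
            for (int row = 0; row < N; row++)
                for (int col = 0; col < N; col++)
                    total += grid[row][col];
            return total;
        }

        long sum_column_major(void) {     /* strided addresses: mostly cache misses */
            long total = 0;
            for (int col = 0; col < N; col++)
                for (int row = 0; row < N; row++)
                    total += grid[row][col];
            return total;
        }

        int main(void) {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    grid[i][j] = 1;       /* fill with something to sum */
            /* Same answer either way; timing the two calls shows the difference. */
            printf("%ld %ld\n", sum_row_major(), sum_column_major());
            return 0;
        }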