{{Useful Notes}}
[[File:DDR 4 RAM SO-DIMM 16GB by Micron-top back PNr°0841.jpg|thumb]]
While the [[CPU]] is the heart of a computer system, moving data to and from its various parts and processing it as required, that data still needs to be held ''somewhere''. That's where RAM comes into play. RAM stands for '''R'''andom '''A'''ccess '''M'''emory—any location in memory can be read or written at any time, without waiting. This contrasts with ''sequential access memory'', where you have to rewind or fast-forward a tape, or wait for the right moment, to reach the data you want.
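
To make the contrast concrete, here is a minimal C sketch (our own illustration, not anything from the hardware itself): an array models random access, where any element is one index calculation away, while a linked list stands in for a sequential medium that has to be walked from the start.

<syntaxhighlight lang="c">
/* Random vs. sequential access, in miniature. */
#include <stdio.h>
#include <stdlib.h>

struct node { int value; struct node *next; };

int main(void) {
    enum { N = 1000 };
    int ram[N];                      /* random access: ram[i] is one address calculation away */
    struct node *tape = NULL;        /* "sequential" stand-in: must be followed link by link  */

    for (int i = N - 1; i >= 0; i--) {
        ram[i] = i;
        struct node *n = malloc(sizeof *n);
        n->value = i;
        n->next = tape;
        tape = n;                    /* list ends up ordered 0, 1, 2, ... N-1 */
    }

    printf("random access:     ram[742] = %d\n", ram[742]);   /* one step */

    struct node *p = tape;           /* sequential access: 742 hops to reach the same element */
    for (int i = 0; i < 742; i++) p = p->next;
    printf("sequential access: element 742 = %d\n", p->value);

    while (tape) { struct node *dead = tape; tape = tape->next; free(dead); }
    return 0;
}
</syntaxhighlight>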
 
Most people simply call RAM "memory" now. It forms the main working level of the computer's [[Memory Hierarchy|storage hierarchy]]. Just as clock speed is misunderstood to be the only measure of [[Central Processing Unit]] power, capacity is often thought to be the only measurement that matters when it comes to Random Access Memory.
 
Memory is not all about capacity. Unless a system or game is idle, memory does not hold the same data indefinitely; data is constantly being moved on and off the memory chips as the workload changes. In other words, capacity is important, but so is how fast data can be moved on and off the chip. In situations where the machine has to multitask (such as PCs, the [[PlayStation 3]], and the [[Xbox 360]]), more capacity can increase performance, but the returns diminish quickly (i.e., doubling the RAM might give a real boost, but doubling it again won't do much). More available RAM is helpful for keeping more of the data you want to use immediately at hand, preventing frequent trips to the much slower hard drive/DVD/Blu-Ray disc, which to a processor take an eternity.
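
To put "an eternity" into numbers, here is a back-of-the-envelope C sketch; the 3 GHz clock, 100 ns DRAM access and 10 ms disk seek are ballpark assumptions, not measurements of any particular machine.

<syntaxhighlight lang="c">
/* Rough conversion of typical access times into CPU clock cycles. */
#include <stdio.h>

int main(void) {
    double cpu_ghz     = 3.0;      /* cycles per nanosecond (assumed)   */
    double dram_ns     = 100.0;    /* one main-memory access (assumed)  */
    double hdd_seek_ns = 10e6;     /* ~10 ms mechanical seek (assumed)  */

    printf("DRAM access: ~%.0f cycles\n", dram_ns * cpu_ghz);       /* ~300 cycles        */
    printf("Disk seek:   ~%.0f cycles\n", hdd_seek_ns * cpu_ghz);   /* ~30 million cycles */
    return 0;
}
</syntaxhighlight>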
 
Like a CPU's, memory speed is measured in both [[Clock Speed]] and latency, and latency tends to affect memory more than it does processors. This is because one also has to take into account the speed of the bus, the shared electrical pathway between components. With RAM embedded on the CPU die, there is a very short distance and a dedicated pathway for the bits to travel across, while RAM placed elsewhere requires the bits to travel a shared bus that other devices may also be using. This means factors such as the bus speed and the number of other devices contending for the bus contribute to data-transfer latency. Even the physical length of the bus can become a non-trivial factor in how fast data can be moved in and out of RAM.
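
For the curious, here is a rough C sketch of how memory latency is usually felt in practice: chasing pointers through a buffer far bigger than any cache, so every access has to wait for the previous one to come all the way back across the bus. The buffer size and shuffling scheme are illustrative choices on our part, not figures from any machine discussed here.

<syntaxhighlight lang="c">
/* Dependent ("pointer-chasing") accesses expose raw memory latency. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    enum { N = 1 << 22 };                       /* ~32 MB of indices: larger than typical caches */
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        /* Sattolo's algorithm: produces one big random cycle through all N slots */
        size_t j = (((size_t)rand() << 15) ^ (size_t)rand()) % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t start = clock();
    size_t p = 0;
    for (size_t hops = 0; hops < N; hops++) p = next[p];   /* each hop waits on the last one */
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("ended at slot %zu; ~%.1f ns per dependent access\n", p, secs / N * 1e9);
    free(next);
    return 0;
}
</syntaxhighlight>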
There are several ways to classify RAM types, but the two most used are the technological classification (that is, by the technology underlying each type) and the usage classification, which breaks the types down by their purpose. Here they are:
 
== By technology ==
 
=== Historical ===
 
Well before modern memory types became available, early machines still needed to store their data—even the ENIAC, which didn't even have a stored program (it was controlled by sequentially wiring all its modules together), had some storage for data. Initially this used the very straightforward and obvious solution—static memory, that is, keeping the data in electronic circuits called triggers, or flip-flops, which can rest in one of two stable states. But the word size in those early machines was somewhere between 20 and 40 bits, a flip-flop can hold only a single bit of information, and building one required at least four electronic valves at a time when the only available type of valve was a huge and fragile vacuum tube, so having more than a couple dozen of such "registers" was simply impractical.
 
But most of these technologies were not terribly practical; they were expensive, slow, and (especially in the case of mercury delay lines) environmentally dangerous. Dr. An Wang (then at Harvard's Computation Laboratory) proposed a solution which took the industry by storm: ''magnetic core memory''.
 
[[File:HP 9101A extended memory unit.- magnetic core memory module - macro detail.jpg|thumb|150px|Close up of core memory.]]
 
Core memory consisted of thousands of tiny (1–2 mm wide) donut-shaped magnetic cores, strung on a grid of metal wires. By manipulating the currents put through these wires, the state of any individual core could be read or written. Since there were no moving parts, as with a delay line or a drum, access time was much quicker. Core memory was also substantially denser than either delay-line or CRT memory, and used less power as well. It also held its contents when the power was off, a property that was widely exploited at the time.
In addition to their compact size (for example, a unit holding 1K, a rather generous amount for the time, was a flat frame only about 20×20×1 cm), core memories were also rather cheap. Cores had to be assembled by hand, even in their last days (early attempts to mechanize the process failed and were abandoned once semiconductor RAM appeared), so most manufacturers used the cheap labor of East Asian seamstresses and embroiderers (who had been made redundant by the widespread adoption of sewing machines), which kept it affordable. Most [[Mainframes and Minicomputers]] used core memory, and it was ingrained into the minds of the people who worked on them to such an extent that even now the word "core" is sometimes used as a synonym for RAM, even though computers in general haven't used it since [[The Seventies]].
 
=== Solid State RAM ===
 
Solid state RAM was the technology that finally ended the core era. It was an outgrowth of attempts to miniaturize electronic circuits. Transistors had replaced vacuum tubes in early computers relatively quickly, due to their smaller size, reliability (they had no glass envelopes to break or filaments to burn out) and much lower power consumption. However, even the smallest transistors at the time were about the size of a small pencil eraser, and it took hundreds of them to make a working computer, so computers still remained bulky and expensive. In [[The Fifties]] two engineers independently figured out how to put several transistors and other electronic components on the same piece of semiconductor, and thus the ''integrated circuit'' was born. The size of electronic circuits started to shrink almost overnight, and one of the first applications in the computer industry was RAM.
** On a side note, there's also a security issue with non-volatile memory. For example, if a computer doing encryption used non-volatile memory, a clever attacker could turn off the machine, pull out the memory, and dump its contents without fear of losing them. For the same attack to work against DRAM (which actually loses its contents gradually, not instantaneously), the attacker would have to chill the RAM chip in liquid nitrogen to slow the discharge process to a crawl.
 
=== DDR RAM ===
 
The DDR stands for "Double Data Rate". Typical RAM transfers data once per clock cycle, while this kind of memory does it twice, on both the rising and falling edges of the clock. That comes at the cost of slightly higher latency, but doubling the transfer rate is a huge advantage for gaming. The [[Xbox]] used DDR memory. The GDDR3 variant has lower power consumption, even faster clock speeds, and allows flushing the memory to prevent dead data from clogging the chip. These advantages got that kind of memory used in the [[Xbox 360]], the [[PlayStation 3]], and the [[Wii]].
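
As a quick worked example, peak transfer rate is roughly the bus clock times the bus width, doubled for DDR because two transfers happen per cycle. The C sketch below uses an illustrative 200 MHz clock and 64-bit module, not the figures of any console mentioned here.

<syntaxhighlight lang="c">
/* Back-of-the-envelope peak bandwidth: single data rate vs. double data rate. */
#include <stdio.h>

int main(void) {
    double clock_hz  = 200e6;   /* I/O bus clock (assumed)   */
    double bus_bytes = 8.0;     /* 64-bit module (assumed)   */

    double sdr_peak = clock_hz * bus_bytes;         /* one transfer per cycle  */
    double ddr_peak = clock_hz * bus_bytes * 2.0;   /* two transfers per cycle */

    printf("SDR peak: %.0f MB/s\n", sdr_peak / 1e6);   /* 1600 MB/s */
    printf("DDR peak: %.0f MB/s\n", ddr_peak / 1e6);   /* 3200 MB/s */
    return 0;
}
</syntaxhighlight>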
 
=== Rambus DRAM ===
 
"Rambus Dynamic RAM" focuses on slightly higher bandwidth, and much higher clock speed. It does come at the cost of higher power consumption, higher capacity, and slower latency. The last one has been reduced in later versions, to the point where the XDR variant on the PS3 has latency no slower than DDR memory.
Rambus DRAM is evidently good for video playback, which is why the PS2 and PS3 were considered such good movie players for their time. The [[PlayStation Portable]] doesn't use that kind of memory, since the increased power consumption would drain the battery; this is why UMD movie playback on TVs looks notably washed out. It was also briefly used in the early 2000s for home PCs; however, although it was indeed blazing fast, upgrading it was far too expensive.
 
=== EDRAM ===
 
Typically, a [[Graphics Processing Unit]] does not have a cache; video memory fills that role. But "Embedded Dynamic RAM" is pretty damn close. It sits right next to the processor instead of inside it. The gain over on-die cache is a larger capacity (though still much smaller than standard memory), while its clock speed still matches the processor's. The tradeoffs are lower bandwidth than cache (but still about 10 to 100 times more than standard memory), higher latency (but still much, much lower than standard memory's), and increased manufacturing cost.
The greatly increased speed means that the video memory can handle about the same amount of data as standard memory of a much larger size. The PS2, [[Game Cube]], Wii, and 360 all use this kind of memory. People thought the size was too small, but once developers caught on, the systems ran just fine with it. Future systems built around the Cell processor plan on using this kind of memory, as it's practically fast enough to keep up with that processor.
 
=== 1T-SRAM ===
 
Probably a good middle ground between standard memory and embedded memory. It has average clock speed and bandwidth, as well as average capacity, combined with a latency only slightly higher than EDRAM's. The Game Cube and the Wii use this kind of memory (yes, the Wii uses this memory, EDRAM, and GDDR3 all at once).
 
== By usage ==
 
=== Registers ===
 
This is the fastest type of memory available. All processors have registers, which are normally built from very fast static RAM. Each holds about a word of data, the number of bits the processor can handle at once. The most important one is the program counter (or instruction counter), which holds the address of the next instruction. Another one that's always found is the status word. Others may be present, some usable by the programmer and some not. Interestingly enough, complex instruction set processors have relatively few registers, while reduced instruction set processors can easily have over 100. For example, most modern x86 CPUs, being very advanced RISC machines internally, use automatic renaming of their ~128 internal registers to simulate several sets of the 14 traditional x86 registers, thus allowing several CISC instructions to run at once.
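
If the idea of the program counter "holding where the next instruction is" sounds abstract, here is a toy C sketch of a fetch-execute loop; the four-instruction machine is entirely made up for illustration, but it shows the PC register deciding what runs next.

<syntaxhighlight lang="c">
/* A miniature machine: the pc register picks the next instruction each time around. */
#include <stdio.h>

enum opcode { LOAD, ADD, JUMP_IF_LESS, HALT };
struct insn { enum opcode op; int a, b; };

int main(void) {
    struct insn program[] = {
        { LOAD, 0, 0 },          /* r0 = 0              */
        { ADD, 0, 1 },           /* r0 += 1             */
        { JUMP_IF_LESS, 0, 5 },  /* if r0 < 5, goto 1   */
        { HALT, 0, 0 },
    };
    int r[2] = {0};              /* a tiny register file      */
    int pc = 0;                  /* the program counter       */

    for (;;) {
        struct insn i = program[pc++];                           /* fetch, then advance the PC */
        if (i.op == LOAD)          r[i.a] = i.b;
        else if (i.op == ADD)      r[i.a] += i.b;
        else if (i.op == JUMP_IF_LESS && r[i.a] < i.b) pc = 1;   /* a jump rewrites the PC     */
        else if (i.op == HALT)     break;
    }
    printf("r0 = %d after the loop\n", r[0]);                    /* prints 5 */
    return 0;
}
</syntaxhighlight>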
 
=== Cache ===
 
Just in case you are wondering, it's pronounced "cash", not "ca-shay". This is memory placed right inside the CPU. Why? The CPU often needs to keep data close at hand while processing. It doesn't need to store a lot, but it needs that storage to be as fast as possible, and cache fills that purpose. By putting it right inside the processor, the latency is minimal and the clock speed matches the processor's. It does mean the cache can only be so large: the 360 has the most cache of any home console, and it's just 1 megabyte. But it's the speed that counts, since it's designed to keep up with the CPU.
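
A rough way to see the effect at home: the C sketch below (sizes and pass counts are our own assumptions, not figures from any console) sums a buffer small enough to live in cache and one far too big for it, doing the same total number of reads in each case. On most machines the small buffer finishes noticeably faster, though the exact gap depends on the hardware and compiler.

<syntaxhighlight lang="c">
/* Cache-friendly vs. cache-hostile sweeps over the same total number of reads. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sum every element of buf, passes times over; elapsed CPU time is reported via *secs.
   The sum is returned and printed so the compiler can't discard the loop. */
static long long sweep(const int *buf, size_t n, size_t passes, double *secs) {
    long long sum = 0;
    clock_t start = clock();
    for (size_t p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++) sum += buf[i];
    *secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    return sum;
}

int main(void) {
    size_t small_n = 64 * 1024 / sizeof(int);        /* 64 KB: fits comfortably in cache */
    size_t large_n = 64 * 1024 * 1024 / sizeof(int); /* 64 MB: spills out to main memory */
    int *small_buf = malloc(small_n * sizeof(int));
    int *large_buf = malloc(large_n * sizeof(int));
    if (!small_buf || !large_buf) return 1;
    for (size_t i = 0; i < small_n; i++) small_buf[i] = (int)(i & 0xFF);
    for (size_t i = 0; i < large_n; i++) large_buf[i] = (int)(i & 0xFF);

    double t_small, t_large;
    /* Same total element reads in both runs: 16384 * 16384 == 16777216 * 16. */
    long long s1 = sweep(small_buf, small_n, 16384, &t_small);
    long long s2 = sweep(large_buf, large_n, 16, &t_large);

    printf("64 KB buffer: %.3f s (sum %lld)\n", t_small, s1);
    printf("64 MB buffer: %.3f s (sum %lld)\n", t_large, s2);
    free(small_buf);
    free(large_buf);
    return 0;
}
</syntaxhighlight>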
{{reflist}}
[[Category:How Video Game Specs Work]]
[[Category:Pages with working Wikipedia tabs]]
[[Category:{{PAGENAME}}]]