We examined the computer's first digital memory (see Computer History – Core Memory), and we mentioned that the current standard for RAM (Random Access Memory) is the chip. This is consistent with the frequently cited Moore's law (Gordon Moore was one of the founders of Intel), which states that the density of components on integrated circuits, which can be paraphrased as performance per unit cost, doubles roughly every 18 months. The earliest memories had cycle times measured in microseconds; today we talk in nanoseconds.
You may know the term cache as applied to PCs. It is one of the performance features mentioned when people talk about the latest processor or hard drive. A processor can have an L1 or L2 cache, and disks have caches of various sizes. Some programs also maintain a cache, also called a buffer, for example when writing data to a CD burner. The first CD-burning programs suffered from buffer "underruns". The end result was a good supply of coasters!
Mainframe systems have used caches for many years. The concept became popular in the 1970s as a way to accelerate memory access times. This was the period when core memory was being phased out and replaced by integrated circuits, or chips. Although chips are much more efficient in their use of physical space, they have their own problems of reliability and heat generation. Chips of one design were faster, hotter and more expensive than chips of another design, which were cheaper but slower. Speed has always been one of the most important factors in computer sales, and design engineers have always looked for ways to improve the performance of the machine.
Of course, one of the great benefits of a computer program is that it can branch or jump out of sequence – more about that in another article in this series. Even so, instructions execute in sequence often enough that predicting the next one makes a useful addition to the computer.
The basic idea of the cache is to predict what data the CPU will require from memory before it is needed. Consider a program composed of a series of instructions, each stored in a memory location, say from address 100 upward. The instruction at location 100 is read from memory and executed by the CPU, then the next instruction is read from location 101 and executed, then 102, 103, and so on.
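The sequential-fetch pattern above is what makes prediction work: if the CPU is executing location 100, the odds are good it will want 101 next. A minimal sketch of that idea in Python (all names and the block size are hypothetical, chosen only for illustration): on a miss, a whole block of consecutive addresses is loaded into the fast cache, so the following sequential fetches hit without touching slow memory.

```python
BLOCK_SIZE = 4  # instructions loaded per slow-memory access (assumed)

class PrefetchCache:
    """Toy instruction cache that loads a block of consecutive addresses on a miss."""

    def __init__(self, memory):
        self.memory = memory  # slow backing store: address -> instruction
        self.cache = {}       # fast store: address -> instruction
        self.hits = 0
        self.misses = 0

    def fetch(self, address):
        if address in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            # Load the whole block containing `address` from slow memory.
            start = (address // BLOCK_SIZE) * BLOCK_SIZE
            for a in range(start, start + BLOCK_SIZE):
                if a in self.memory:
                    self.cache[a] = self.memory[a]
        return self.cache[address]

# A program of 16 instructions stored from address 100 upward, run in sequence.
memory = {addr: f"INSTR_{addr}" for addr in range(100, 116)}
cpu_cache = PrefetchCache(memory)
for addr in range(100, 116):  # sequential execution: 100, 101, 102, ...
    cpu_cache.fetch(addr)

print(cpu_cache.hits, cpu_cache.misses)  # 12 hits, 4 misses
```

Only one fetch in four goes out to slow memory; the other three are served from the cache, which is the whole point of exploiting sequential execution.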
If the memory in question is core memory, it takes 1 microsecond to read an instruction. If the processor then takes, say, 100 nanoseconds to execute the instruction, it must wait 900 nanoseconds for the next instruction (1 microsecond = 1000 nanoseconds). The effective instruction rate of the processor is therefore one instruction per microsecond. (The times and speeds quoted are typical but do not refer to any specific hardware; they simply illustrate the principles involved.)
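The arithmetic above is worth making explicit, because it shows how badly the slow memory starves the processor. A back-of-the-envelope calculation using the article's illustrative numbers:

```python
# Illustrative figures from the text, not any specific hardware.
memory_read_ns = 1000  # 1 microsecond to read one instruction from core memory
execute_ns = 100       # time for the CPU to execute that instruction

wait_ns = memory_read_ns - execute_ns
print(wait_ns)         # 900 ns spent idle, waiting for the next instruction

cycle_ns = max(memory_read_ns, execute_ns)
print(cycle_ns)        # effective rate: one instruction per 1000 ns

utilization = execute_ns / cycle_ns
print(f"{utilization:.0%}")  # the CPU is busy only 10% of the time
```

With these numbers the processor does useful work only a tenth of the time, which is exactly the gap a cache of faster memory is meant to close.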