Does hard drive cache affect speed? What is cache in hard drives? Disk geometry, platter and recording density

The hard drive (HDD) is one of the most important parts of a computer. If the processor, video card, or another component fails, you only lose the money spent on a replacement; if the hard drive fails, you risk irretrievably losing important data. The overall speed of the computer also depends on the hard drive. Let's figure out how to choose the right one.

Hard drive tasks

The job of a hard drive inside a computer is to store and retrieve information very quickly. The hard drive is a remarkable invention of the computer industry: using the laws of physics, this small device stores an enormous amount of information.

Hard drive type

IDE - an obsolete interface; such drives are used for connecting to older motherboards.

SATA - replaced IDE and offers higher data transfer speeds.

The SATA interface exists in several revisions, which differ in data transfer speed and in the technologies they support (the short calculation after the list shows where the quoted figures come from):

  • SATA (SATA I) has a transfer speed of up to 150 MB/s.
  • SATA II has a transfer speed of up to 300 MB/s.
  • SATA III has a transfer speed of up to 600 MB/s.
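Where these figures come from: SATA revisions signal at 1.5, 3, and 6 gigabits per second and use 8b/10b encoding, so ten transmitted bits carry one byte of payload. The small Python sketch below is purely illustrative and just turns the nominal line rates into the familiar MB/s numbers.

```python
# Rough sketch: mapping the nominal SATA line rates to the MB/s figures above.
# SATA uses 8b/10b encoding, so 10 transmitted bits carry 1 byte of payload.

LINE_RATES_GBPS = {"SATA I": 1.5, "SATA II": 3.0, "SATA III": 6.0}

for name, gbps in LINE_RATES_GBPS.items():
    max_mb_per_s = gbps * 1e9 / 10 / 1e6   # bits/s -> payload bytes/s -> MB/s
    print(f"{name}: up to {max_mb_per_s:.0f} MB/s interface throughput")
```

Keep in mind these are interface ceilings; the mechanics of a conventional hard drive cannot saturate even SATA II in sustained transfers.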

SATA III drives have been produced only since the beginning of 2010. When purchasing such a hard drive, pay attention to the age of your computer (if it has not been upgraded): an older SATA controller will not let the drive reach its full speed. SATA and SATA II drives have the same connectors and are compatible with each other.

Hard disk capacity

The most common hard drives used at home have capacities of 250, 320, or 500 gigabytes. Smaller drives of 80 or 120 gigabytes also exist, but they are increasingly rare and no longer sold new. For storing very large amounts of information there are 1, 2, and 4 terabyte drives.

Hard drive speed and cache

When choosing a hard drive, it is important to pay attention to its operating speed (spindle speed), since the speed of the whole computer depends on it. Common spindle speeds are 5400 and 7200 rpm.

Buffer memory (cache memory) is the hard drive's own onboard memory. It comes in several sizes: 8, 16, 32, or 64 megabytes. The larger and faster this built-in memory, the higher the drive's data transfer speed will be.

In conclusion

Before purchasing, check which interface suits your motherboard: IDE, SATA, or SATA III. Then look at the spindle speed and the amount of buffer memory; these are the main indicators to pay attention to. Finally, consider the manufacturer and a capacity that suits you.

We wish you happy shopping!

Share your choice in the comments; it will help other users make the right choice!




Hard drive cache

05.09.2005

All modern drives have a built-in cache, also called a buffer. Its purpose is somewhat different from that of a processor cache: it acts as a buffer between a fast device and a slow one. In the case of hard drives, the cache is used to temporarily store the results of the most recent reads from the disk, as well as to prefetch information that may be requested a little later, for example, the sectors immediately following the currently requested one.

Using a cache increases the performance of any hard drive by reducing the number of physical accesses to the disk, and it also allows the hard drive to keep working even when the host bus is busy. Most modern drives have a cache of 2 to 8 megabytes, while the most advanced SCSI drives have 16 megabytes, which is more than the total memory of an average computer from the nineties of the last century.

It should be noted that when someone talks about a disk cache, they most often mean not the hard drive's own cache but a buffer allocated by the operating system in main memory to speed up read and write operations.

The reason the hard drive cache is so important is the large difference between the speed of the drive mechanics and the speed of the drive interface. Finding the sector we need takes whole milliseconds, because time is spent moving the head and waiting for the desired sector to come around, and in a modern personal computer even one millisecond is a long time. On a typical IDE/ATA drive, transferring a 16-kilobyte block of data from the cache to the computer is roughly a hundred times faster than finding and reading it from the surface. This is why all hard drives have an internal cache.
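To make the gap concrete, here is a back-of-the-envelope comparison in Python. The figures (a 100 MB/s interface burst rate, a 9 ms average seek, a 7200 rpm spindle) are assumed typical values for illustration, not measurements of any specific drive.

```python
# Back-of-the-envelope: serving a 16 KB request from the drive's cache versus
# reading it from the platters. All figures are assumed "typical" values.

BLOCK_BYTES    = 16 * 1024
INTERFACE_MB_S = 100      # e.g. an ATA/100-class burst rate out of the cache
AVG_SEEK_MS    = 9.0      # typical desktop-drive average seek time
SPINDLE_RPM    = 7200

cache_ms    = BLOCK_BYTES / (INTERFACE_MB_S * 1e6) * 1000
rotation_ms = 60_000 / SPINDLE_RPM / 2        # on average, wait half a revolution
platter_ms  = AVG_SEEK_MS + rotation_ms       # ignoring the short media read itself

print(f"from cache:    {cache_ms:.2f} ms")    # ~0.16 ms
print(f"from platters: {platter_ms:.1f} ms")  # ~13 ms
print(f"ratio:         ~{platter_ms / cache_ms:.0f}x")
```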

Writing data to disk is a different situation. Suppose we need to write the same 16-kilobyte block and a cache is present. The hard drive instantly transfers the block to its internal cache and reports to the system that it is free for new requests, while writing the data to the surface of the magnetic platters in the background. For sequential reading of sectors from the surface, on the other hand, the cache no longer plays a big role, because sequential read speed and interface speed are approximately the same in that case.

General concepts of hard disk cache operation

The simplest principle of cache operation is to store not only the requested sector but also several sectors after it. As a rule, reading from a hard drive occurs not in 512-byte blocks but in 4096-byte blocks (a cluster, although the cluster size may vary). The cache is divided into segments, each of which can store one block of data. When a request reaches the hard drive, the drive controller first checks whether the requested data is in the cache and, if so, serves it to the computer instantly without physically accessing the surface. If the data is not in the cache, it is first read into the cache and only then transferred to the computer. Because the cache size is limited, its contents are constantly being replaced; typically the oldest piece is evicted to make room for a new one. This is called a circular buffer, or circular cache.
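As a minimal sketch of the scheme just described, the Python class below keeps a fixed number of one-block segments and, on a miss, evicts the oldest segment first, exactly like a circular buffer. The block size, segment count, and helper names are illustrative, not taken from any real firmware.

```python
from collections import OrderedDict

BLOCK_SIZE = 4096                              # one segment holds one 4 KB block

class DiskReadCache:
    """Segment-based read cache with oldest-first (circular) eviction."""

    def __init__(self, num_segments=8):
        self.num_segments = num_segments
        self.segments = OrderedDict()          # block number -> data, oldest first

    def read_block(self, block_no, read_from_platter):
        if block_no in self.segments:          # cache hit: no physical access
            return self.segments[block_no]
        data = read_from_platter(block_no)     # cache miss: go to the surface
        if len(self.segments) >= self.num_segments:
            self.segments.popitem(last=False)  # evict the oldest segment
        self.segments[block_no] = data
        return data

cache = DiskReadCache()
fake_platter = lambda n: bytes(BLOCK_SIZE)     # stand-in for a slow physical read
cache.read_block(10, fake_platter)             # miss: read from the "platter"
cache.read_block(10, fake_platter)             # hit: served straight from the cache
```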

To increase the performance of the drive, manufacturers have come up with several methods for increasing the speed of operation using the cache:

  1. Adaptive segmentation. Typically the cache is divided into segments of the same size. Since requests can have different sizes, this wastes cache blocks, because each request is split across segments of fixed length. Many modern drives dynamically change the segment size: by detecting the size of a request, they grow or shrink segments to fit it, using the cache more efficiently. The number of segments may also change. This is more complex than working with fixed-length segments and can lead to fragmentation of data within the cache, increasing the load on the drive's microprocessor.
  2. Prefetching. Based on an analysis of the data being requested at the moment and of previous requests, the drive's microprocessor loads into the cache data that has not yet been requested but is likely to be (a toy sketch follows this list). The simplest case of prefetching is to load the data lying just beyond the currently requested data, because statistically it is more likely to be requested next. If the prefetch algorithm is implemented well in the drive's firmware, it increases the drive's speed across different file systems and data types.
  3. User control. High-end hard drives provide a set of commands that allow the user to control cache operation precisely: enabling and disabling the cache, controlling segment size, enabling and disabling adaptive segmentation and prefetching, and so on.
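As promised, here is a toy sketch of the simplest prefetch policy from point 2: after a miss, the next few blocks are pulled into the cache as well. The read-ahead depth and the two workloads are invented for illustration; the point is only that sequential access benefits enormously while random access barely benefits at all.

```python
import random

READ_AHEAD = 4                              # how many extra blocks to prefetch
cache, hits, misses = set(), 0, 0

def read(block_no):
    global hits, misses
    if block_no in cache:
        hits += 1                           # served from the cache
    else:
        misses += 1                         # physical access to the platters
        cache.update(range(block_no, block_no + READ_AHEAD + 1))   # read ahead

for b in range(100):                        # sequential workload
    read(b)
print(f"sequential: {hits} hits / {misses} misses")    # 80 hits / 20 misses

cache, hits, misses = set(), 0, 0
random.seed(1)
for _ in range(100):                        # random workload across a huge disk
    read(random.randrange(100_000))
print(f"random:     {hits} hits / {misses} misses")    # almost all misses
```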

Although the cache increases the drive's speed in the system, it also has its disadvantages. To begin with, the cache does not speed up the drive at all for random requests to data located at different ends of the platter, since prefetching is pointless for such requests. Nor does it help when reading large amounts of data, because it is quite small: when copying a 10 megabyte file with the 2 megabyte buffer that is typical today, only about 20% of the copied file fits into the cache.

Because of these and other features of the cache, it does not speed up the drive as much as we would like. The gain it gives depends not only on the size of the buffer but also on the caching algorithms run by the drive's microprocessor, as well as on the type of files being worked with at the moment. And, as a rule, it is very difficult to find out which caching algorithms a specific drive uses.

The figure shows the cache chip of the Seagate Barracuda drive; it has a capacity of 4 megabits or 512 kilobytes.

Caching read-write operations


In recent years, hard drive manufacturers have significantly increased the cache capacity of their products. Even in the late 90s, 256 kilobytes was the standard for all drives and only high-end devices had a 512 kilobyte cache. Currently, a 2 MB cache has become the de facto standard for all drives, while the most productive models have capacities of 8 or even 16 MB. As a rule, 16 megabytes are found only on SCSI drives. There are two reasons why the drive buffer has grown so quickly. One of them is a sharp decline in prices for synchronous memory chips. The second reason is the belief of users that doubling or even quadrupling the cache size will greatly affect the speed of the drive.

The size of the hard drive cache does, of course, affect the speed of the drive in the operating system, but not as much as users imagine. Manufacturers take advantage of users' faith in cache size, and in advertising brochures they make loud statements about a cache four times larger than in the standard model. However, comparing the same hard drive with 2 and 8 megabyte buffers shows that the speedup amounts to a few percent. What does this mean? It means that only a very large difference in cache size (for example, between 512 kilobytes and 8 megabytes) noticeably affects the speed of the drive. You also need to remember that the hard drive buffer is quite small compared to the computer's memory, and often a greater contribution is made by the "software" cache, that is, the intermediate buffer that the operating system sets up in the computer's memory for caching file system operations.
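The arithmetic behind the "several percent" claim can be sketched with a simple average-access-time model: if a fraction h of requests is served from the cache, the mean service time is h·t_cache + (1 − h)·t_disk, and enlarging the buffer only nudges h upward a little. The hit rates and timings below are assumed purely for illustration.

```python
# Toy model: why a much bigger buffer often buys only a few percent.
T_CACHE_MS, T_DISK_MS = 0.2, 13.0          # assumed service times

def avg_access_ms(hit_rate):
    return hit_rate * T_CACHE_MS + (1 - hit_rate) * T_DISK_MS

small = avg_access_ms(0.30)                # e.g. a 2 MB buffer
large = avg_access_ms(0.33)                # e.g. an 8 MB buffer, slightly higher hit rate
print(f"speedup: {(small / large - 1) * 100:.1f}%")    # ~4%, i.e. a few percent
```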

Read caching and write caching are similar in some ways, but they also have many differences. Both are intended to increase the overall performance of the drive: both act as a buffer between the fast computer and the slow drive mechanics. The main difference is that one of them does not change the data on the drive, while the other does.

Without caching, each write operation would mean a tedious wait for the heads to move to the right place and for the data to be written to the surface. Working with a computer would be impossible: as mentioned earlier, on most hard drives this operation takes at least 10 milliseconds, which is a long time from the point of view of the computer as a whole, since the processor would have to wait those 10 milliseconds every time information is written to the hard drive. Remarkably, exactly such a cache mode exists: data is written to the cache and to the surface at the same time, and the system waits for both operations to complete. This is called write-through caching. It gives a speedup only when the newly written data needs to be read back in the near future, sooner than the write to the surface itself would have finished.

Fortunately, there is a faster way for the cache to work: the computer writes data to the drive, it goes into the cache, and the drive immediately responds to the system that the write has been completed; the computer continues to work, believing that the drive was able to write data very quickly, while the drive “deceived” the computer and only wrote the necessary data to the cache, and only then began writing it to the disk. This technology is called write-back caching.
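The difference between the two policies can be sketched in a few lines of Python. The sleep stands in for the slow mechanical write, and the queue plays the role of "dirty" data sitting in the cache; the timings and class names are illustrative, not a model of any real firmware.

```python
import time
from collections import deque

def write_to_platter(block):
    time.sleep(0.013)                  # ~13 ms of seek, rotation and media write

class WriteThroughCache:
    def write(self, block):
        write_to_platter(block)        # acknowledge only after the media write

class WriteBackCache:
    def __init__(self):
        self.dirty = deque()           # acknowledged data not yet on the platters
    def write(self, block):
        self.dirty.append(block)       # acknowledge immediately ("deceive" the host)
    def flush(self):                   # what a "flush write cache" command forces
        while self.dirty:
            write_to_platter(self.dirty.popleft())
```

With write-back, write() returns instantly, and the risk discussed below is exactly whatever sits in the dirty queue at the moment the power disappears.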

Of course, write-back caching increases performance, but it also has its drawbacks. The hard drive tells the computer that the write has been completed while the data is still only in the cache, and only then begins writing it to the surface, which takes some time. This is not a problem as long as the computer has power. But cache memory is volatile: the moment power is lost, its entire contents are gone irretrievably. If data was sitting in the cache waiting to be written to the surface when the power went out, that data is lost forever. Worse, the system does not know which data never reached the disk, because the drive has already reported that it wrote it. So we not only lose the data itself, we also don't know what was lost, and we may not even know that a failure occurred. As a result, part of a file may be lost, breaking its integrity, breaking the operating system, and so on. Read caching, of course, is not affected by this problem.

Because of this risk, some workstations do not use write caching at all; modern drives allow it to be disabled, which is especially important in applications where data integrity is critical. Since this type of caching greatly increases drive speed, however, people usually prefer other ways of reducing the risk of data loss from a power outage. The most common is connecting the computer to an uninterruptible power supply. In addition, all modern drives have a "flush write cache" command, which forces the drive to write the contents of the cache to the surface; the system has to issue it blindly, because it does not know whether there is anything in the cache or not. Every time the computer shuts down, modern operating systems send this command to the hard drive, then a command to park the heads (although the latter could be omitted, since every modern drive parks its heads automatically when the supply voltage drops below the minimum permissible level), and only then does the computer power off. This guarantees the safety of user data and a proper shutdown of the hard drive.
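For the curious, this is roughly how those ideas surface on an ordinary Linux machine; the file name and device path below are placeholders, and hdparm needs root privileges.

```python
import os, subprocess

# 1) An application that cannot afford to lose a write asks for durability:
with open("journal.log", "ab") as f:
    f.write(b"critical record\n")
    f.flush()                 # push the data out of the application's buffers
    os.fsync(f.fileno())      # ask the OS to write it out; on typical modern
                              # setups this also triggers a drive cache flush

# 2) Turning the drive's write-back cache off entirely (the option mentioned above):
# subprocess.run(["hdparm", "-W0", "/dev/sdX"], check=True)   # run as root
```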


What is a hard disk buffer and why is it needed?

Today the most common storage device is the magnetic hard drive. It has a certain amount of memory designed to store the main data, and it also has a buffer memory whose purpose is to hold intermediate data. Professionals call the hard disk buffer “cache memory” or simply “cache”. Let's figure out why the HDD buffer is needed, what it affects, and how large it is.

The hard disk buffer lets the operating system temporarily hold data that has been read from the drive's platters but has not yet been passed on for processing. Such transit storage is needed because the speed of reading from the HDD and the throughput of the OS differ significantly. The computer therefore temporarily stores data in the “cache” and only then uses it for its intended purpose.

The hard disk buffer is not a set of dedicated sectors, as inexperienced users sometimes believe. It is a special memory chip located on the drive's controller board. Such chips operate much faster than the drive itself, and as a result they provide a noticeable (a few percent) increase in the computer's performance during operation.

It is worth noting that the size of the “cache memory” depends on the specific drive model. It used to be about 8 megabytes, and that figure was considered satisfactory. However, as technology developed, manufacturers were able to produce chips with more memory, so most modern hard drives have a buffer of 32 to 128 megabytes. Naturally, the largest caches are installed in expensive models.

What impact does a hard drive buffer have on performance?

Now let's explain why the size of the hard drive buffer affects computer performance. In theory, the more information sits in the “cache memory”, the less often the operating system has to access the hard drive itself. This is especially true for workloads in which the user processes a large number of small files: they simply move into the hard drive buffer and wait there for their turn.

However, if the PC is used to process large files, the “cache” loses its relevance: such files simply cannot fit into a chip of such modest capacity. In that case the user will not notice any increase in performance, since the buffer is barely used. This happens, for example, when running video-editing programs and similar software.

Thus, when buying a new hard drive, it makes sense to pay attention to the size of the “cache” only if you plan to work constantly with small files; then you really will notice an increase in your personal computer's performance. If the PC is used for ordinary everyday tasks or for processing large files, you don't need to attach much importance to the buffer.

The hard drive cache is a temporary storage of data.
If you have a modern hard drive, then the cache is not as important as it used to be.
Later in the article you will learn more about the role the cache plays in hard drives and what cache size is needed for a fast computer.

What is cache used for?

The hard drive cache stores frequently used data in a specially designated place, so the cache size determines how much of that data can be kept. Thanks to a large cache, hard drive performance can increase significantly, because frequently used data can be loaded into the cache and then served without a physical read when it is requested.
Physical reading is a direct access to the sectors of the hard disk. It takes a fairly significant amount of time, measured in milliseconds. The hard drive cache, by contrast, returns requested information roughly 100 times faster than a physical access to the disk would. The cache also allows the hard drive to keep working even when the host bus is busy.

Along with the importance of the cache, we should not forget the other characteristics of a hard disk, and sometimes the cache size can be neglected. If you compare two hard drives of the same capacity with different cache sizes, for example 8 and 16 MB, the larger cache is worth choosing only if the price difference is small, roughly $7-$12. Otherwise it makes no sense to overpay for the larger cache.

The cache deserves a closer look if you are buying a gaming computer and no detail is too small for you; in that case you also need to look at the spindle speed.

Summarizing all of the above

The advantage of the cache is that data can be served without a long wait, whereas a physical access to a particular sector takes time while the head finds the desired piece of information and starts reading. In addition, hard drives with a large cache noticeably relieve the computer's processor, because serving a request from the cache requires no physical access and very little processor involvement.

The hard drive cache can be called a real accelerator, because its buffering function really does allow the hard drive to work much faster and more efficiently. However, with the rapid development of high technology, the cache matters less than it once did, since most modern models use an 8 or 16 MB cache, which is quite enough for the drive to perform well.

Today there are hard drives with an even larger 32 MB cache, but as we said, it's only worth paying extra for the difference if the difference in price corresponds to the difference in performance.

Greetings, dear readers! For ordinary people whose minds have not yet been clouded by computer technology, the word “Winchester” first brings to mind the famous rifle that was extremely popular in the USA. Computer people have a completely different association: it is what many of us call a hard drive.

In today's publication, we will look at what hard drive buffer memory is, what it is needed for, and how important this parameter is for performing various tasks.

How a hard drive works

An HDD is essentially the drive on which everything is stored: user files as well as the operating system itself. In theory you can do without this component, but then the OS has to be loaded from removable media or over a network connection, and working documents have to be stored on a remote server.

The base of a hard drive is a round aluminum or glass platter. It is sufficiently rigid, which is why the device is called a hard drive. The platter is coated with a layer of ferromagnetic material (usually chromium dioxide), whose regions store a one or a zero through magnetization and demagnetization. There can be several such platters on one spindle, and a small high-speed electric motor spins them.

Unlike a gramophone, where the needle touches the record, the read heads do not touch the platters but hover a few nanometers above them. The absence of mechanical contact increases the service life of the device.

However, no part lasts forever: over time the ferromagnetic layer loses its properties, which leads to a loss of disk capacity, usually along with user files.

That is why it is recommended to make a backup copy of important or treasured data (for example, a family photo archive or the fruits of the computer owner's creativity), or better yet several copies.

What is cache

Buffer memory or cache is a special type of RAM, a kind of “layer” between the magnetic disk and the PC components that process the data stored on the hard drive. It is designed for smoother reading of information and storage of data that is currently most often accessed by the user or the operating system.

What does the cache size affect? The more data fits in it, the less often the computer has to access the hard drive itself. Accordingly, the performance of such a workstation increases (as you already know, the magnetic platters of a hard drive are far slower than a RAM chip), and so, indirectly, does the service life of the hard drive.

Indirectly, because different users use the hard drive differently: for example, someone who watches movies in an online cinema through a browser will, in theory, have a hard drive that lasts longer than someone who downloads movies via torrent and watches them in a video player.

Can you guess why? That's right, due to the limited number of cycles of rewriting information on the HDD.

How to view the buffer size

To see the cache size, download and install the HD Tune utility. After starting the program, the parameter of interest can be found on the “Information” tab at the bottom of the window.
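For Linux users, a possible alternative to HD Tune: many (though not all) drives report their buffer size in the identify data printed by hdparm -I. The sketch below simply greps that output; /dev/sda is just an example device, and the command needs root privileges.

```python
import re, subprocess

out = subprocess.run(["hdparm", "-I", "/dev/sda"],
                     capture_output=True, text=True, check=True).stdout
match = re.search(r"buffer size\s*=\s*(\d+)\s*KBytes", out, re.IGNORECASE)
print(f"cache: {int(match.group(1)) // 1024} MB" if match else "buffer size not reported")
```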

Optimal sizes for various tasks

A logical question arises: how much buffer memory is best for a home computer, and what does it give in practice? Naturally, more is preferable. However, hard drive manufacturers impose their own limits: for example, a drive with 128 MB of buffer memory will cost noticeably more than average.

This is the cache size that I recommend focusing on if you want to build a gaming computer that will not become outdated in a couple of years. For simpler tasks, you can get by with simpler characteristics: 64 MB is enough for a home media center. And for a computer that is used purely for surfing the Internet and running office applications and simple flash games, a buffer memory of 32 MB is quite enough.

As a “golden mean” I can recommend the Toshiba P300 1TB 7200rpm 64MB HDWD110UZSVA 3.5 SATA III hard drive: the cache size here is average, but the capacity of the drive itself is quite enough for a home PC. To complete the picture, I also recommend reading this site's other publications about hard drives.

A personal collection of digital data tends to grow rapidly over time. Over the years, thousands of songs, films, photographs, documents, and video courses accumulate, and they naturally have to be stored somewhere. A computer's hard drive, no matter how large it is, will someday run out of free space.

An obvious solution to the lack of storage space is to buy DVDs, USB flash drives, or an external hard drive (HDD). Flash drives usually offer only a few GB of space, are hardly suitable for long-term storage, and their price-to-capacity ratio is, to put it mildly, not the best. DVDs are a good option in terms of price but inconvenient for recording, rewriting, and deleting data, and they are slowly dying out as a technology. An external HDD provides a large amount of space, is portable and easy to use, and is well suited for long-term data storage.

When buying an external HDD, in order to make the right choice, you need to know what to look for first. In this article we will tell you what criteria you need to follow when choosing and purchasing an external hard drive.

What to look for when buying an external hard drive

Let's start with choosing a brand; well-regarded manufacturers include Maxtor, Seagate, Iomega, LaCie, Toshiba, and Western Digital.
The most important characteristics to pay attention to when purchasing:

Capacity

The amount of disk space is the first thing to consider. A basic rule of thumb when purchasing is to multiply the capacity you think you need by three. For example, if you think 250 GB of additional space is enough, buy a model of 750 GB or more. Drives with a lot of storage tend to be quite bulky, which limits their portability; those who frequently carry external storage with them should take this into account. For desktop computers, models with several terabytes of disk space are available.

Form factor

The form factor determines the size of the device. Currently, 2.5-inch and 3.5-inch form factors are used for external HDDs.
2.5-inch drives are smaller, lightweight, powered from the port, compact, and portable.
3.5-inch drives are larger, require an external power supply, are quite heavy (often more than 1 kg), and offer a large amount of disk space. Pay attention to the mains power requirement: if you plan to connect such a device to a laptop with weak USB ports and without its power adapter, the port may not be able to spin up the disk, and the drive simply will not work.

Rotation speed (RPM)

The second important factor to consider is the rotation speed of the disk, specified in RPM (revolutions per minute). Higher speed means faster reading and writing. Any HDD with a rotation speed of 7200 RPM or more is a good choice. If speed is not critical for you, you can choose a 5400 RPM model; such drives are quieter and run cooler.
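A quick calculation shows where the advantage of a faster spindle comes from: on average the drive has to wait half a revolution for the right sector to arrive under the head.

```python
# Average rotational latency = time for half a revolution.
for rpm in (5400, 7200, 10_000):
    avg_latency_ms = 60_000 / rpm / 2
    print(f"{rpm} rpm: ~{avg_latency_ms:.1f} ms average rotational latency")
# 5400 rpm: ~5.6 ms, 7200 rpm: ~4.2 ms, 10000 rpm: ~3.0 ms
```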

Cache size

Every external HDD has a buffer or cache that temporarily holds data before it goes to disk. Drives with larger caches transfer data faster than those with smaller caches. Choose a model that has at least 16 MB of cache memory, preferably more.

Interface

Apart from the factors above, another important feature is the interface used for data transfer. The most common is USB 2.0. USB 3.0 is gaining popularity, as the new generation offers significantly higher transfer speeds, and models with FireWire and eSATA interfaces are also available. We recommend choosing models with USB 3.0 or eSATA, which offer high transfer speeds, provided that your computer has the appropriate ports. If being able to connect the external drive to as many devices as possible matters most to you, choose a model with a USB 2.0 interface.
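For orientation, here are rough theoretical ceilings for the interfaces mentioned, computed from their nominal signalling rates; real-world throughput is noticeably lower, USB 2.0 especially.

```python
interfaces = {                      # name: (signalling rate in Mbit/s, line-coding efficiency)
    "USB 2.0":          (480, 1.0),     # limited mostly by protocol overhead
    "FireWire 800":     (800, 1.0),
    "eSATA (3 Gbit/s)": (3000, 0.8),    # 8b/10b encoding
    "USB 3.0":          (5000, 0.8),    # 8b/10b encoding
}
for name, (mbit_s, efficiency) in interfaces.items():
    print(f"{name}: ~{mbit_s * efficiency / 8:.0f} MB/s theoretical maximum")
```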

Cache memory is ultra-fast memory with higher performance than RAM.

Cache memory complements the functional value of RAM.
When a computer is running, all calculations take place in the processor, and the data for these calculations and their results are stored in RAM. The processor is several times faster than the exchange of information with RAM. Considering that between two processor operations one or more accesses to this slower memory may be needed, the processor ends up idling from time to time and the overall speed of the computer drops.

Cache memory is managed by a special controller which, by analyzing the program being executed, tries to predict what data and instructions the processor is most likely to need in the near future and loads them into the cache memory; that is, the cache controller loads the necessary data from RAM into the cache and, when needed, returns data modified by the processor back to RAM.

The processor cache performs roughly the same function as RAM, except that the cache is memory built into the processor and is therefore faster than RAM, partly because of its location. The communication lines running across the motherboard and through the socket have a detrimental effect on speed. In a modern personal computer the cache sits directly on the processor, which has made it possible to shorten these lines and improve their characteristics.

Cache memory is used by the processor to store information. It buffers the most frequently used data, due to which the time of the next access to it is significantly reduced.
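A small experiment makes the effect visible: summing the same array along contiguous rows (cache- and prefetch-friendly) versus down strided columns (where each access lands in a different cache line). It uses numpy, and the exact numbers will vary from machine to machine.

```python
import time
import numpy as np

a = np.zeros((8192, 4096))      # C-order array, ~256 MB: rows are contiguous in memory

t0 = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(a.shape[0]))   # sequential access
t1 = time.perf_counter()
col_total = sum(a[:, j].sum() for j in range(a.shape[1]))   # strided access
t2 = time.perf_counter()

print(f"row-wise:    {t1 - t0:.2f} s")
print(f"column-wise: {t2 - t1:.2f} s   (typically several times slower)")
```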

All modern processors have a cache: an array of ultra-fast memory that serves as a buffer between the processor and the controller of the relatively slow system memory. This buffer stores the blocks of data the CPU is currently working with, thereby greatly reducing the number of processor accesses to system memory, which is extremely slow compared to the processor.

This significantly increases the overall performance of the processor.
Moreover, in modern processors the cache is no longer a single memory array, as it used to be, but is divided into several levels. The fastest, but relatively small, first-level cache (denoted L1), which the processor core works with directly, is most often split into two halves: an instruction cache and a data cache. The L1 cache interacts with the second-level cache, L2, which is usually much larger and is unified, without a split into instruction and data caches.

Some desktop processors, following the example of server processors, have also acquired a third-level L3 cache. The L3 cache is usually even larger, although somewhat slower than L2 (because the bus between L2 and L3 is narrower than the bus between L1 and L2), but its speed is in any case incomparably higher than that of system memory.
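On Linux, this hierarchy can be inspected directly: the kernel exposes each level's size and type through sysfs. The sketch below lists the caches of CPU 0; the paths are standard, but which levels appear depends on the processor.

```python
from pathlib import Path

for index in sorted(Path("/sys/devices/system/cpu/cpu0/cache").glob("index*")):
    level = (index / "level").read_text().strip()
    ctype = (index / "type").read_text().strip()   # Data, Instruction or Unified
    size  = (index / "size").read_text().strip()
    print(f"L{level} {ctype}: {size}")
```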

There are two cache organization schemes: exclusive and non-exclusive (inclusive). In the first case, the information in the caches of all levels is strictly separated, each holding only unique data, while with a non-exclusive cache information may be duplicated at all caching levels. It is hard to say which of these two schemes is better; each has both drawbacks and advantages. The exclusive caching scheme is used in AMD processors, the non-exclusive one in Intel processors.

Exclusive cache memory

Exclusive cache memory assumes the uniqueness of the information located in L1 and L2.
When reading information from RAM into the cache, the information is immediately entered into L1. When L1 is full, information is transferred from L1 to L2.
If, when the processor reads information from L1, the necessary information is not found, then it is looked for in L2. If the necessary information is found in L2, then the first and second level caches exchange lines with each other (the “oldest” line from L1 is placed in L2, and the required line from L2 is written in its place). If the necessary information is not found in L2, then the access goes to the RAM.
The exclusive architecture is used in systems where the difference between the volumes of the first and second level caches is relatively small.

Inclusive cache

An inclusive architecture involves duplication of information found in L1 and L2.
The scheme works as follows. When information is copied from RAM into the cache, two copies are made: one is stored in L2, the other in L1. When L1 is completely full, data is replaced according to the LRU (Least Recently Used) principle: the entry that has gone unused the longest is evicted. The same happens with the second-level cache, but since its volume is larger, information stays in it longer.

When the processor reads information from the cache, it is taken from L1. If the necessary information is not in the first level cache, then it is looked for in L2. If the necessary information is found in the second level cache, it is duplicated in L1 (using the LRU principle), and then transferred to the processor. If the necessary information is not found in the second level cache, then it is read from RAM.
The inclusive architecture is used in those systems where the difference in the size of the first and second level caches is large.
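The LRU rule mentioned above is easy to sketch: every hit refreshes an entry's "age", and the entry unused for the longest time is evicted. The class below is a generic illustration, not a model of any particular processor.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()            # address -> data, least recent first

    def access(self, address, load_from_next_level):
        if address in self.lines:
            self.lines.move_to_end(address)   # hit: mark as most recently used
            return self.lines[address]
        data = load_from_next_level(address)  # miss: fetch from the next level
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)    # evict the least recently used entry
        self.lines[address] = data
        return data
```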

However, cache memory is ineffective when working with large amounts of data (video, sound, graphics, archives). Such files simply do not fit in the cache, so the processor has to access RAM, or even the HDD, constantly, and the advantages disappear. This is why budget processors with a reduced cache (for example, Intel Celeron) remain popular: in multimedia tasks, which involve processing large amounts of data, performance is not greatly affected by cache size, even despite the Celeron's reduced bus frequency.

Hard drive cache

As a rule, all modern hard drives have their own RAM, called cache memory or simply cache. Hard drive manufacturers often refer to this memory as buffer memory. The cache size and organization differ significantly between manufacturers and between hard drive models.

Cache memory acts as a buffer for storing intermediate data that has already been read from the hard drive but has not yet been passed on for further processing, as well as for data that the system accesses frequently. The need for such transit storage comes from the difference between the speed of reading data from the hard drive and the throughput of the system.

Typically, cache memory is used for both writing and reading data, but on SCSI drives write caching is usually disabled by default, so it sometimes has to be enabled explicitly. Although it may seem to contradict what was said above, cache size is not the decisive factor in improving performance.

More important for overall disk performance is how data exchange with the cache is organized.
In addition, overall performance is affected by the algorithms of the controller electronics, which must avoid inefficiencies when working with the buffer (holding stale data, fragmentation, and so on).

In theory: the larger the cache memory, the higher the likelihood that the necessary data is in the buffer and there will be no need to “disturb” the hard drive. But in practice, it happens that a disk with a large amount of cache memory is not much different in performance from a hard disk with a smaller amount; this happens when working with large files.
