Asynchronous programs showing poorer locality of reference?


I was reading this excellent article, which introduces asynchronous programming, and I came across the following line that I find difficult to understand.

"Since there is no actual parallelism (in async), it appears from our diagrams that an asynchronous program will take just as long to execute as a synchronous one, perhaps longer as the asynchronous program might exhibit poorer locality of reference."

Can anyone explain how locality of reference comes into the picture here?

Locality of reference, as the Wikipedia article explains, is the observation that when some data is accessed (on disk, in memory, wherever), other data near that location is often accessed as well. This observation makes sense, since developers tend to group related data together, and because the data is related, it is often processed together. In particular, this is known as spatial locality.

For a simple example, consider computing the sum of an array, or multiplying two matrices. The data representing the array or matrix is typically stored in consecutive memory locations, and in this example, once you access one particular location in the array you are likely to access the others as well.
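To make that access pattern concrete, here is a minimal Python sketch. It uses the standard array module because its elements are stored contiguously; a plain Python list stores references to boxed objects instead, so the effect is weaker there:

```python
from array import array

# One million contiguous 4-byte integers. Iterating front to back visits
# adjacent memory addresses: textbook spatial locality.
values = array("i", range(1_000_000))

total = 0
for v in values:  # index 0, then 1, 2, ... each access lands next to the last
    total += v

print(total)  # 499999500000
```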

On the computer architecture side, there is the notion of a "page" in a virtual memory operating system, which is (roughly) a 4 KB chunk of data that is tracked, and transferred between physical memory and disk, as a single unit. When you touch some memory that is not resident (not physically in RAM), the OS will pull the entire page from disk into memory. The reason for this is locality: you are likely to touch other data near the data you just touched.
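As an illustrative sketch of demand paging (Unix-only, since it uses mmap.PROT_READ and the resource module; the scratch file path is hypothetical, and exact fault counts vary because the OS may map or read ahead more than one page), you can watch a single byte access pull in a whole page, after which nearby bytes are already resident:

```python
import mmap
import resource  # Unix-only; lets us observe page faults

PAGE = mmap.PAGESIZE  # typically 4096 bytes
path = "/tmp/locality_demo.bin"  # hypothetical scratch file

with open(path, "wb") as f:
    f.write(b"\x00" * PAGE * 256)  # a 1 MiB file: 256 pages

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    before = resource.getrusage(resource.RUSAGE_SELF).ru_minflt
    _ = mm[0]    # touching one byte faults the whole page into the process
    _ = mm[100]  # same page: already resident, no new fault
    after = resource.getrusage(resource.RUSAGE_SELF).ru_minflt
    print(f"page size: {PAGE} bytes, faults for two nearby reads: {after - before}")
    mm.close()
```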

In addition, CPUs have the concept of caches. For example, a CPU may have an L1 (level 1) cache, which is essentially a big block of on-CPU storage that the CPU can access faster than RAM. If a value is in the L1 cache, the CPU will use it instead of going out to RAM. Following the principle of locality of reference, when a CPU accesses some value in main memory, it will bring that value and all the values near it into the L1 cache. This set of values is known as a cache line. Cache line sizes vary, but the point is that when you access the first value of an array, the CPU may need to fetch it from RAM, but subsequent accesses (due to proximity) will be fast, because the CPU brought that whole bundle of values into the L1 cache on the first access.
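One rough way to see cache lines from Python is to time the same number of element reads done contiguously versus one element per cache line (64 bytes is assumed below, i.e. 16 int32 values per line). NumPy is used because its arrays are stored contiguously; the exact numbers will vary by machine, but the strided walk should be noticeably slower because every read lands on a fresh line:

```python
import time
import numpy as np  # assumed available; contiguous storage makes the effect visible

N = 1 << 24                     # 16M int32 values = 64 MiB, far bigger than L1
data = np.ones(N, dtype=np.int32)
k = N // 16                     # compare equal element counts

def timed_sum(view):
    t0 = time.perf_counter()
    total = int(view.sum())
    return total, time.perf_counter() - t0

# k contiguous values: each fetched 64-byte line serves 16 int32 reads.
_, seq = timed_sum(data[:k])

# k values with stride 16: every read touches a different cache line.
_, strided = timed_sum(data[::16])

print(f"contiguous: {seq:.4f}s  one-value-per-line: {strided:.4f}s")
```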

So, to answer your question: if you imagine a synchronous program that computes the sum of a very large array, it will touch memory locations in sequence, one after the other. In this case, your locality is good. In the asynchronous case, however, you might have n threads, each taking a slice of the array (of size 1/n) and computing a partial sum. Each thread is touching a potentially very different location in memory (since the array is large), and since each thread can be switched in and out of execution, the actual pattern of data accesses, from the perspective of the OS or CPU, is poor. The L1 cache on a CPU is finite, so if thread 1 brings in a cache line (due to one of its accesses), it may evict thread 2's cache line. Then, when thread 2 accesses its array value, it has to go out to RAM, which will bring in its cache line and potentially evict thread 1's cache line, and so on. Depending on overall system resources and usage, this pattern could happen at the OS/page level as well.
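Here is a short sketch of that divided-array scenario, assuming Python threads (note that CPython's GIL serializes the bytecode, so the point is the interleaved access pattern described above, not a speedup):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np  # assumed available

N_THREADS = 4
data = np.ones(1 << 24, dtype=np.int32)   # one very large array
chunks = np.array_split(data, N_THREADS)  # each thread gets a 1/n slice

def partial_sum(chunk):
    # Each thread walks a distant region of the array; whenever the scheduler
    # switches threads, the location being touched jumps with it, and one
    # thread's cache lines can evict another's.
    return int(chunk.sum())

with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # 16777216
```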
