I know that registers serve as storage units for the CPU to access data from in order to execute an instruction. The assembly language for these instructions looks something like ADD R2, R1, R3, essentially asking us to add the contents of R1 and R3 and place the result in R2. My question is: how does data get into the registers R1 and R3 so that the CPU can use those values to compute and store the result in R2? And if all registers get full, is data evicted from the registers to main memory using an LRU method, similar to how data is evicted from caches?
>Solution :
Values get both into and out of registers via machine code instructions, which can:
- Enter constants from the program into registers or memory
  - These are usually called load immediate or move
- Enter user input (like keystrokes) into registers or memory
  - Some processors have dedicated input instructions; others load from memory-mapped I/O
- Load data from memory, where long-lived variables and data structures are located
- Send register values to memory
  - To update data structures and other long-lived variables
- Send data to an output device (like a console)
- Compute new values from existing values
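To make the instruction categories above concrete, here is a minimal sketch in Python of a hypothetical register machine (the register names, memory addresses, and instruction set are invented for illustration, loosely following the ADD R2, R1, R3 style from the question). It shows values entering registers via load-immediate and loads from memory, then a computed result going back out:

```python
# Hypothetical 4-register machine (illustrative only, not a real ISA).
regs = {"R0": 0, "R1": 0, "R2": 0, "R3": 0}  # register file
mem = {0x100: 7}                             # a tiny "main memory"

def LI(rd, imm):        # load immediate: a constant from the program into a register
    regs[rd] = imm

def LOAD(rd, addr):     # load: memory -> register
    regs[rd] = mem[addr]

def STORE(rs, addr):    # store: register -> memory
    mem[addr] = regs[rs]

def ADD(rd, rs1, rs2):  # compute a new value from existing register values
    regs[rd] = regs[rs1] + regs[rs2]

# Getting data into R1 and R3 before the ADD from the question:
LOAD("R1", 0x100)       # R1 = mem[0x100] = 7
LI("R3", 35)            # R3 = 35, a constant from the program itself
ADD("R2", "R1", "R3")   # R2 = R1 + R3 = 42
STORE("R2", 0x108)      # send the result back out to memory
```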
> And if all registers get full
It depends somewhat on what you mean by full. In one sense, the registers always hold values; there is no notion in the CPU of free vs. busy architecturally specified registers (modulo some deep concepts in floating point and vector registers, and internal implementation details of out-of-order processors and register renaming). The same is true of your hard drive: it has exactly N gigabytes of storage, and as far as the hardware is concerned, that number never grows or shrinks.
> And if all registers get full, is data evicted to main memory from the registers using an LRU method similar to how data from caches are evicted?
Yes, but virtually 100% under program control: the program knows which logical variables from our algorithms are in which CPU registers (and which are in which memory locations). So, for an algorithm translated to assembly, there is a notion both of having sufficient registers (leaving some unused) and of wanting more registers than are available. When the latter happens, compiler writers and assembly programmers simply turn to memory for the overflow, writing machine code instructions to transfer data back and forth as needed; this is known as spilling. Less important values can live in memory and suffer slower access times, leaving the CPU registers available where they matter most.
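To make the spilling idea concrete, here is a hypothetical sketch (register count, variable names, and the spill slot are all invented) of computing (a + b) * (c + d) on a machine with only two registers. The intermediate a + b cannot stay in a register while c and d are loaded, so the program itself writes it out to memory and reloads it later:

```python
# Hypothetical 2-register machine computing (a + b) * (c + d).
mem = {"a": 2, "b": 3, "c": 4, "d": 5, "spill0": None}
regs = {"R0": 0, "R1": 0}

def LOAD(rd, var):     regs[rd] = mem[var]
def STORE(rs, var):    mem[var] = regs[rs]
def ADD(rd, rs1, rs2): regs[rd] = regs[rs1] + regs[rs2]
def MUL(rd, rs1, rs2): regs[rd] = regs[rs1] * regs[rs2]

LOAD("R0", "a"); LOAD("R1", "b")
ADD("R0", "R0", "R1")   # R0 = a + b
STORE("R0", "spill0")   # spill: both registers are needed for c and d
LOAD("R0", "c"); LOAD("R1", "d")
ADD("R0", "R0", "R1")   # R0 = c + d
LOAD("R1", "spill0")    # reload the spilled intermediate under program control
MUL("R0", "R0", "R1")   # R0 = (c + d) * (a + b)
```

Note that the spill and reload are explicit instructions the compiler (or assembly programmer) chose to emit; the hardware does not decide on its own to evict R0.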
(In the hard drive analogy, all the bits are always there but not always in meaningful use.)
To be clear, the hardware does have many LRU (or pseudo-LRU) algorithms, but these generally belong to the cache hierarchy, including the L1 and L2 caches and the TLB.
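For contrast with the program-controlled spilling above, here is a toy sketch of the kind of LRU eviction caches approximate (illustrative only; real hardware caches are set-associative and typically use cheaper pseudo-LRU schemes, and the class and addresses here are invented):

```python
from collections import OrderedDict

class LRUCache:
    """Toy fully-associative cache with true LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> data, least recent first

    def access(self, addr, data):
        if addr in self.lines:
            self.lines.move_to_end(addr)          # mark most-recently-used
        else:
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)    # evict least-recently-used
            self.lines[addr] = data

cache = LRUCache(2)
cache.access(0x100, "A")
cache.access(0x200, "B")
cache.access(0x100, "A")   # touch 0x100: now 0x200 is least recently used
cache.access(0x300, "C")   # evicts 0x200, keeps 0x100
```

Unlike register spilling, this replacement decision happens transparently in hardware; the program never issues an "evict this cache line" instruction in the normal case.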