1. How can rotate left through carry be implemented with an addc instruction?

The answer was to explain how ADDC works.
Adding a register to itself is the same as multiplying by two, which is the same as a left shift by one. ADDC also adds in the carry flag, so the old carry becomes the new bottom bit, and the bit carried out of the top becomes the new carry. That is exactly a rotate left through carry.
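A minimal sketch in C of what a single ADDC r, r does, assuming a 16-bit register and a one-bit carry flag (the variable names are illustrative):

#include <stdio.h>

/* Simulate ADDC r, r: r = r + r + carry.
   r + r shifts r left by one; the old carry fills bit 0 and the
   bit carried out of bit 15 becomes the new carry flag. */
int main(void) {
    unsigned short r = 0x8001;  /* example register value */
    unsigned carry = 0;         /* carry flag, 0 or 1 */

    unsigned sum = (unsigned)r + r + carry;
    carry = (sum >> 16) & 1;        /* carry-out = old top bit */
    r = (unsigned short)sum;        /* low 16 bits = rotated value */

    printf("r=0x%04X carry=%u\n", r, carry);  /* prints r=0x0002 carry=1 */
    return 0;
}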




Assignment 2


- The CPU has another level of memory, the cache, which is much faster than primary memory.
- The cache is limited in size; if the desired data is in the cache, it is a ‘hit’, otherwise a ‘miss’.

There are three types of memory access:
fetch, load, store
Cache can refer to any memory location.
- When we do a fetch, load, or store, we go through the cache.
- Fetch: first go to the cache. The value is either there or it's not.
→ Come in with an address, MAR
→ The value is either in the cache
⇒ hit
⇒ if we're doing a fetch, we can read it directly from the cache and return it
→ or not in the cache
⇒ miss
⇒ What do we do?
⇒ We have to get it from the actual memory.
• Pass the MAR to memory
• memory returns the value in the MDR
• the value is returned to the cache
• then passed on to the CPU (see the sketch after this list)
- A cache line holds not only the instruction or data but also its address: [addr][data/instr] ← cache line
- It is a fraction of the size of primary memory.
- We essentially have a table of addresses and values.
- We want to get as much of the program into the fast memory as possible, because then the program will run faster. But the cache has a finite size; we only have 32 words...
- You want to fit as much of the program in the cache as possible. The problem is, if the program exceeds the cache and two instructions map to the same cache location, that's not good.
- You want as much of the same section of code in the cache as possible.
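A rough C sketch of that fetch path; cache_lookup(), mem_read(), and cache_fill() are hypothetical helpers standing in for the lookup and update steps described below:

#include <stdbool.h>

/* Hypothetical helpers (assumed, not from the lecture): */
bool cache_lookup(unsigned short mar, unsigned short *mdr);  /* hit? */
unsigned short mem_read(unsigned short mar);                 /* bus read */
void cache_fill(unsigned short mar, unsigned short mdr);     /* update cache */

/* Fetch: go through the cache; on a miss, go to primary memory
   and update the cache on the way back. */
unsigned short fetch(unsigned short mar) {
    unsigned short mdr;
    if (cache_lookup(mar, &mdr))  /* hit: read it directly from the cache */
        return mdr;
    mdr = mem_read(mar);          /* miss: pass the MAR to memory;
                                     memory returns the value in the MDR */
    cache_fill(mar, mdr);         /* the value is returned to the cache */
    return mdr;                   /* and passed on to the CPU */
}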


- We now know we have a cache that consists of lines of addr and instr/data.
- Every time a fetch/load takes place, the address (MAR) comes in.
- How do we know if it's a hit or a miss?
→ check all the locations in the cache
→ if the address (MAR) equals one of the cache lines' addresses, then we have a hit
- You have to get a miss in order for anything to be put into the cache in the first place.
- On a hit, return the value; on a miss, read from memory using the bus, then update the cache.

Associative memory:
- equivalent to a for loop
- FOR i = 0 TO CACHE_MAX DO:
→ IF MAR == CACHE[i].ADDR
⇒ MDR = CACHE[i].DATA
⇒ RETURN
→ END
- END
- MISS → READ from MEM
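As a C sketch of that loop, assuming a small line record with addr and data fields (the field names are assumptions):

#include <stdbool.h>

#define CACHE_MAX 16   /* lecture example size; the assignment uses 32 */

struct line { unsigned short addr, data; };
struct line cache[CACHE_MAX];

/* Associative lookup: compare the MAR against every line's address. */
bool assoc_lookup(unsigned short mar, unsigned short *mdr) {
    for (int i = 0; i < CACHE_MAX; i++) {
        if (cache[i].addr == mar) {   /* hit */
            *mdr = cache[i].data;
            return true;
        }
    }
    return false;                     /* miss: read from memory instead */
}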

Direct mapping:
- perform a function on the MAR. This then gives us the cache location we want to check.
- f(MAR) → cache loc.
- IF MAR == CACHE[cache loc].ADDR → hit, ELSE → miss

For example, assuming we have a 16-line cache, we could take the least significant 4 bits of the MAR as the key:
Cache address ← MAR & 0x0F

Every 16th address maps to the same cache location; every address ending in 0 maps to cache[0].
If you coincidentally happen to be accessing 1000, 1010, 1100... they keep replacing the item in location 0. It's miss after miss.
Direct mapping looks great until you start getting these collisions.
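The same lookup with direct mapping, as a sketch under the 16-line assumption above:

#include <stdbool.h>

#define MASK 0x0F   /* low 4 bits of the MAR index a 16-line cache */

struct line { unsigned short addr, data; };
struct line cache[16];

/* Direct mapping: f(MAR) picks exactly one line to check. */
bool direct_lookup(unsigned short mar, unsigned short *mdr) {
    unsigned loc = mar & MASK;      /* cache address ← MAR & 0x0F */
    if (cache[loc].addr == mar) {   /* hit */
        *mdr = cache[loc].data;
        return true;
    }
    return false;                   /* miss: 1000, 1010, 1100... all land
                                       on loc 0 and evict each other */
}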

If the data hasn't changed, it doesn't need to be written back to memory. Instructions never get changed.
Data might get changed, or it might not. If there is no change we don't need to write it back to memory.
We keep track of whether a line has been changed with the dirty bit.
As long as we are only reading from a location, the dirty bit does not change.
Once the location is written to, the dirty bit gets set. That means that if this cache location is to be overwritten, the value needs to be written back to memory first.

If we're doing a fetch or a load, and we've had a miss, we read the data/instruction from primary memory.
We either have associative or direct mapping.
If we're using direct mapping, we only have one choice of where to put the item. (Say the item is at address 2000: it has to go in position 0. If 1000 is already in cache position 0, it must be overwritten; whatever was in cache[0] gets replaced.)
If we're using associative mapping, how would it work? Associative is more flexible: we have all 16 lines of our cache to choose from. Say our cache has 1000 in location 0 and it's been used many times, but 1016 is in another location and has only been used once; we should replace the one that was barely used.
We can keep a sort of access count for the cache locations, find the one that's been least recently used, and kick it out.
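One way to implement that in the emulator, as a sketch (the age field and these function names are assumptions): bump every line's age on each access, zero the age of the line just used, and evict the line with the largest age.

#define CACHE_MAX 16

struct line { unsigned short addr, data; unsigned age; };
struct line cache[CACHE_MAX];

/* The least recently used line is the one with the largest age. */
int lru_victim(void) {
    int oldest = 0;
    for (int i = 1; i < CACHE_MAX; i++)
        if (cache[i].age > cache[oldest].age)
            oldest = i;
    return oldest;
}

/* On every access: age all lines, then mark the touched line youngest. */
void touch(int used) {
    for (int i = 0; i < CACHE_MAX; i++)
        cache[i].age++;
    cache[used].age = 0;
}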


**Option: when you run the emulator, pick which cache mode to use. Or compile it twice, once per mode.


How will we represent the cache in the emulator?

Cache line = an address + instruction/data + dirty bit + age
From this we can build a cache:
cache = 16{cache line}16 (fixed at 16 cache lines)

In our assignment we have a cache size of 32.

Age doesn't matter in direct mapping. But you still need the dirty bit.
Age matters in associative, and so does the dirty bit.
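As a C sketch of that record (field names and sizes are my assumptions, not the official assignment layout):

#define CACHE_SIZE 32   /* our assignment; the lecture example used 16 */

typedef struct {
    unsigned short addr;   /* address of the cached word */
    unsigned short data;   /* instruction or data stored at that address */
    unsigned char  dirty;  /* set on a write; forces write-back on eviction */
    unsigned char  age;    /* LRU counter; only meaningful in associative mode */
} CacheLine;

CacheLine cache[CACHE_SIZE];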

What if we're doing a store?
Depending on our cache algorithm, we do the same thing as before:
direct mapping:
- cache address ← MAR & MASK
assoc:
- Loop through and check for a hit.

[cache address] [[addr][I/D]]
If the MAR equals the line's address, there's a hit. Update the cache:
cache[cache address].DATA = MDR
then you make the “age” of the cache line very young: 0
update the dirty bit: set

What if we had a miss?
We know we have a value in the MDR, and a value in the MAR. But there's no room in the cache and we've had a miss.
We should kick out the oldest item... however, we have to make a decision: if the dirty bit is set on that item, we first have to write its value back to memory.
cache[cache address].DATA → MEM
Now we have a free spot.
cache[cache address].DATA ← MDR
cache[cache address].ADDR ← MAR
set the dirty bit.
age gets the lowest value.

However, if the dirty bit was 0, we just overwrite the value without sending it to memory.
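Putting the store path together as a sketch for the associative case; the line record matches the one above, and lru_victim() and touch() are the helpers sketched earlier, with mem_write() a hypothetical bus write:

#define CACHE_MAX 16

typedef struct { unsigned short addr, data; unsigned char dirty, age; } CacheLine;
CacheLine cache[CACHE_MAX];

void mem_write(unsigned short addr, unsigned short data);  /* hypothetical bus write */
int  lru_victim(void);  /* index of the oldest line, as sketched earlier */
void touch(int i);      /* age all lines, set line i's age to 0 */

/* Store: on a hit update the line in place; on a miss evict a victim,
   writing it back first only if its dirty bit is set. */
void store(unsigned short mar, unsigned short mdr) {
    for (int i = 0; i < CACHE_MAX; i++) {
        if (cache[i].addr == mar) {   /* hit */
            cache[i].data  = mdr;     /* cache[loc].DATA = MDR */
            cache[i].dirty = 1;       /* dirty bit: set */
            touch(i);                 /* age gets the lowest value */
            return;
        }
    }
    int v = lru_victim();             /* miss: kick out the oldest line */
    if (cache[v].dirty)
        mem_write(cache[v].addr, cache[v].data);  /* write back if dirty */
    cache[v].addr  = mar;             /* cache[loc].ADDR ← MAR */
    cache[v].data  = mdr;             /* cache[loc].DATA ← MDR */
    cache[v].dirty = 1;               /* set the dirty bit */
    touch(v);                         /* age gets the lowest value */
}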
