2023/07/12 - 15:11
dennis kevin matthew
- 3 body problem
- dark forest

- adds a margin of error into computation
- removes some precision in exchange for accepting a little error
- alternative hardware and software
→ 16-bit systems rather than 32-bit, since less precision is required for approx computing (see the sketch after this list)
→ software support is required because a developer needs to leverage what the hardware provides
- why?
→ more energy efficient. the hardware is made up of different components which drain less energy; lower precision means lower-intensity circuits can be used
→ fast calculations. the way calcs are done, they no longer need to be precise so there can be leeway. errors can be considered “close enough”
→ more cost effective: components are cheaper, and 16-bit systems would be cheaper to manufacture
→ since you don't need exact output you can get away with cheaper systems
→ Future benefits: tech is up and coming
⇒ ChatGPT: AI engines use approx computing for ML algorithms, making training faster and allowing larger datasets
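
rough sketch (my own, not from the talk) of the 16-bit vs 32-bit point: the same dot product at full precision vs half precision carries a small error but needs half the memory per value

```python
import numpy as np

# Rough sketch (not from the talk): the same dot product at 32-bit vs 16-bit
# precision. The float16 result carries a small error, but each value needs
# half the memory, which is the basic trade approximate computing makes.
rng = np.random.default_rng(0)
x = rng.random(1_000, dtype=np.float32)
y = rng.random(1_000, dtype=np.float32)

reference = np.dot(x.astype(np.float64), y.astype(np.float64))   # "exact" result
approx = np.dot(x.astype(np.float16), y.astype(np.float16))      # 16-bit result

print(f"reference (64-bit): {reference:.6f}")
print(f"approx    (16-bit): {float(approx):.6f}")
print(f"relative error    : {abs(float(approx) - reference) / reference:.2e}")
print(f"bytes per value   : 32-bit={x.itemsize}, 16-bit={x.astype(np.float16).itemsize}")
```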

hardware
- we normally think correct/incorrect... but approx computing says a result can be “close enough” and we will say it's correct
- approx adders and approx multipliers inject approximations into the arithmetic itself (see the sketch after this list)
→ lower precision, less power, can be faster, less memory (fewer bits stored), takes less physical space
→ adders
⇒ multi bit
• traditional: carry-lookahead adder
◇ accuracy can be turned up and down, which introduces complexity into the compiler
• error tolerant adder, dithering adder
◇ dithering adder alternates between upper and lower bounds to lower the error variance
⇒ full adders
• an accurate XNOR-based full adder uses 10 transistors; the approx version is simpler
→ multipliers
⇒ reduced precision of the circuits to increase speed of calculation and lower power consumption
- need to be able to measure all this stuff
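
minimal software models (my sketch, not necessarily the exact designs named above) of how approximate arithmetic units trade accuracy for simplicity: a lower-part-OR style adder that ORs the low bits instead of adding them, and a truncation-style multiplier that drops low operand bits

```python
# Minimal software models (sketches only) of approximate arithmetic units.

def approx_add(a: int, b: int, k: int = 4) -> int:
    """Lower-part-OR style adder: OR the low k bits (no carries), add the rest exactly."""
    mask_lo = (1 << k) - 1
    lo = (a | b) & mask_lo            # cheap OR in place of a real add, so no carry chain
    hi = ((a >> k) + (b >> k)) << k   # exact addition on the upper bits only
    return hi | lo

def approx_mul(a: int, b: int, k: int = 4) -> int:
    """Truncation-style multiplier: drop the low k bits of each operand before multiplying."""
    return ((a >> k) * (b >> k)) << (2 * k)

if __name__ == "__main__":
    import random
    random.seed(1)
    add_err, mul_err = [], []
    for _ in range(10_000):
        a, b = random.randrange(1 << 16), random.randrange(1 << 16)
        add_err.append((a + b) - approx_add(a, b))                 # always >= 0
        mul_err.append(abs(a * b - approx_mul(a, b)) / max(a * b, 1))
    print("adder: mean absolute error     ", sum(add_err) / len(add_err))
    print("multiplier: mean relative error", sum(mul_err) / len(mul_err))
```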

performance metrics
- application level accuracy
→ accuracy = how close the approximate result is to the exact one
→ sometimes exact calcs arent required
→ we can get a good-enough result, which outweighs the cost of an exact calculation
→ what is good enough?
→ image processing: streaming, Netflix. we can compare the approx-computed image to a reference image created by an accurate (traditional) system using the structural similarity index (SSIM), which measures how much an image has degraded (comparing luminance, contrast, and structure)
→ error tolerance: some things are more tolerant to errors than others
→ e.g. video streaming: if we use approx to generate an image, the flaws are not noticeable to the human eye
⇒ financial calculations or scientific work require a lot of accuracy, so it wouldn't really work there
→ energy efficiency: approx calculations take less energy
→ it's beneficial to researchers to measure the power consumption
⇒ you can do this physically, with something directly on the architecture to measure power consumption
⇒ or we can simulate it and collect data from the simulated components
→ we compare against a traditional architecture: run the two benchmarks and compare them
→ speed/latency
⇒ measures efficiency of a system. speed refers to how long an operation takes and latency is the time for a set of operations to complete
⇒ reduction of accuracy may be worth it if it improves speed or latency
⇒ measured through throughput, a measure of the number of tasks that can be completed in a fixed time
⇒ compare the throughput of the two systems and analyze which is faster, traditional or approx
→ reliability robustness
⇒ we want the system to be reliable enough that when we give it the same input we should get the same output to a reasonable degree
⇒ we want a low error margin. we can say a system is reliable if there's little variance between runs of the same input
⇒ robustness: a metric that determines how the system operates under different conditions. if we give it different conditions and it has low variance then we can say it's robust, but if different conditions give unexpected results then it's not robust
→ quality/efficiency trade off
⇒ we want both quality to be present and efficiency to be present
⇒ quality = accuracy of computing
⇒ efficiency is how fast the system executes instructions
⇒ this tradeoff means we want to balance the two
⇒ e.g. video streaming: sometimes if we trade off some quality it's not noticeable, but for scientific computation the quality trade-off is more noticeable, so efficiency should be sacrificed instead
⇒ researchers plot this on a curve to get the best value
⇒ they find the optimal point on the curve for the specific application (see the sketch after this list)
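
toy example of how a quality/efficiency trade-off curve could be built (my setup, not the speakers' benchmark): quantise the operands of a dot product to fewer and fewer bits, treat bit-width as a stand-in for energy/area cost, and record the relative error at each setting

```python
import numpy as np

# Sketch of building a quality/efficiency trade-off curve (toy setup, not
# from the talk): fewer bits per operand means cheaper hardware, and we
# record how much accuracy each setting gives up.
rng = np.random.default_rng(0)
x = rng.random(100_000)
y = rng.random(100_000)
reference = float(np.dot(x, y))          # "exact" result at full precision

print(f"{'bits':>4} {'relative error':>16}")
for bits in (16, 12, 10, 8, 6, 4):
    scale = (1 << bits) - 1
    xq = np.round(x * scale) / scale     # quantise each operand to `bits` bits
    yq = np.round(y * scale) / scale
    err = abs(float(np.dot(xq, yq)) - reference) / reference
    print(f"{bits:>4} {err:>16.2e}")
# Plotting error against bits (or against measured energy/latency on real
# hardware) gives the curve where researchers pick the knee for a given app.
```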

When is it appropriate to use it IRL?
- machine learning
→ image processing models
⇒ security cams with facial recognition
⇒ don't need to train it on 4K images for it to get close enough
→ NLP models
⇒ Teams' auto-caption feature: approx computing improves the speed at which captions are generated
⇒ speed is arguably more important than accuracy in captioning because desync is bad
→ mobile devices
⇒ small imperfections in video quality are much less visible on small screens
→ big data analytics
⇒ have to be careful using approx computing on big data
⇒ you need a huge dataset
⇒ approx computing on a large enough dataset can closely mimic exact results (see the sampling sketch after this list)
→ IOT
⇒ smart homes: tend to sacrifice functionality for form. has to fit in but still have smart functionality. e.g. a switch connecting to a TV doesn't need the TV's exact location, just a general location
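
toy sketch of sampling-based approximate analytics (my example, not from the talk): estimate the mean of a large dataset from a 1% random sample instead of scanning everything, then compare error and time against the exact answer

```python
import random, time

# Approximate big-data analytics by sampling (toy example): with enough data,
# a small random sample gives an answer very close to the exact scan, for a
# fraction of the work.
random.seed(42)
data = [random.gauss(100.0, 15.0) for _ in range(2_000_000)]   # stand-in "big data"

t0 = time.perf_counter()
exact = sum(data) / len(data)
t_exact = time.perf_counter() - t0

t0 = time.perf_counter()
sample = random.sample(data, 20_000)                           # scan ~1% of the rows
approx = sum(sample) / len(sample)
t_approx = time.perf_counter() - t0

print(f"exact mean  : {exact:.3f}  ({t_exact*1000:.1f} ms)")
print(f"approx mean : {approx:.3f}  ({t_approx*1000:.1f} ms)")
print(f"relative err: {abs(approx - exact) / exact:.2%}")
```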

summary
- getting as close to the result as we can while leaving some detail behind saves time, space, and power
- approx adders and multipliers...
- difficulties are lower accuracy, which limits applications
- can increase load on the compiler because it needs to be told which applications need accurate computing and which don't
- we want to introduce some error, but the level of error needs to be known to the system, which increases compiler load
- performance metrics make sure the benefits actually outweigh the costs
- most important applications are ML, multimedia processing, IOT, big data analytics


if you think of lossy compression, is it implicitly approximate?
in what way do they go about factoring in the loss?
human ability to recognize colour and sound is not uniformly distributed, so you can lose data in certain areas of the spectrum and it will be less noticeable to the human eye
the game you play is “because we don't see, say, blue as well as red, we can use less resolution in representing that part of the spectrum” (see the sketch below)
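
illustration of that idea (my sketch, not from the discussion): a 4:2:0-style chroma subsampling pass that keeps brightness (luma) at full resolution but stores colour (chroma) at quarter resolution, since the eye notices brightness detail far more than colour detail

```python
import numpy as np

# Sketch of "lose data where humans notice it least": convert RGB to a
# luma + chroma representation (approximate BT.601-style weights), keep luma
# at full resolution, and keep only 1 of every 4 chroma samples.
def to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    return y, cb, cr

def subsample(chan):
    small = chan[::2, ::2]                                        # drop 3 of every 4 samples
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)      # naive upsample back

rgb = np.random.default_rng(0).random((64, 64, 3))                # stand-in image
y, cb, cr = to_ycbcr(rgb)
cb_lossy, cr_lossy = subsample(cb), subsample(cr)                 # luma y is left untouched

print("chroma samples kept  :", cb_lossy[::2, ::2].size, "of", cb.size)
print("mean abs chroma error:", float(np.mean(np.abs(cb - cb_lossy))))
```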


- Why was the particular approach to computation adopted in preference to a traditional computer architecture?

-



- What are the architectural innovations supporting the proposed approach to computation?

-



- What advantages and disadvantages result from adopting the proposed approach to computation?

-



- Given the unique properties of the proposed computing platform, how did the authors go about measuring performance?

-



- What are the ‘killer applications’ for the proposed approach to computation?


-

Index