Floating-Point Adders
- Mohamed Abdelgawad
- Feb 1
- 1 min read
"A bunch of tensors getting multiplied and added": this is what 70% of AI workloads boil down to. Nothing fancy. My passion for arithmetic hardware in AI led me to focus on the two main building blocks: adders and multipliers. Adders, however, consume 4 to 30 times less energy than multipliers. Interestingly, since 2019, a growing body of research has explored designing neural networks that rely solely on addition, eliminating multipliers entirely. In these notes, I'll walk you through the fundamentals of adding two floating-point numbers in hardware! pretty trivial, eh? Not really! Let’s do some bit-smashing together and see what is happening under the hood of addition in hardware!
Download the notes here
Download the design here
![Floating Point Adder/Subtractor [Design by the author]](https://static.wixstatic.com/media/58b0d3_64761d885a01461f852e19e9e9e086e1~mv2.png/v1/fill/w_980,h_1121,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/58b0d3_64761d885a01461f852e19e9e9e086e1~mv2.png)
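To give a taste of the datapath in the figure above, here is a minimal C sketch of single-precision addition that mirrors the classic hardware stages: unpack, align exponents, add significands, normalize, and repack. This is my own simplified illustration, not the design in the notes: the helper names (`f2u`, `u2f`, `fp_add`) are made up for this post, it assumes two positive, normal inputs, skips NaN/infinity/subnormal handling, and truncates instead of rounding.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Reinterpret a float's bits as a uint32_t (and back) without UB. */
static uint32_t f2u(float f) { uint32_t u; memcpy(&u, &f, sizeof u); return u; }
static float u2f(uint32_t u) { float f; memcpy(&f, &u, sizeof f); return f; }

/* Simplified IEEE-754 single-precision addition for two positive,
 * normal numbers. The steps mirror the hardware datapath:
 * unpack -> align exponents -> add significands -> normalize -> repack. */
static float fp_add(float fa, float fb) {
    uint32_t a = f2u(fa), b = f2u(fb);

    /* Unpack: 1 sign bit, 8 exponent bits, 23 fraction bits. */
    uint32_t ea = (a >> 23) & 0xFF, eb = (b >> 23) & 0xFF;
    /* Prepend the implicit leading 1 of a normal number. */
    uint64_t ma = (a & 0x7FFFFF) | 0x800000;
    uint64_t mb = (b & 0x7FFFFF) | 0x800000;

    /* Ensure operand A carries the larger exponent (swap if needed). */
    if (eb > ea) { uint32_t t = ea; ea = eb; eb = t;
                   uint64_t m = ma; ma = mb; mb = m; }

    /* Align: right-shift the smaller significand by the exponent gap. */
    uint32_t shift = ea - eb;
    mb = (shift < 32) ? (mb >> shift) : 0;

    /* Add the 24-bit significands; the sum may carry into bit 24. */
    uint64_t sum = ma + mb;
    uint32_t e = ea;

    /* Normalize: a carry-out means shift right once and bump the exponent. */
    if (sum & 0x1000000) { sum >>= 1; e += 1; }

    /* Repack (rounding here is plain truncation, not round-to-nearest). */
    return u2f((e << 23) | (uint32_t)(sum & 0x7FFFFF));
}

int main(void) {
    printf("%g + %g = %g (bit-level)\n", 1.5f, 2.25f, fp_add(1.5f, 2.25f));
    printf("%g + %g = %g (reference)\n", 1.5f, 2.25f, 1.5f + 2.25f);
    return 0;
}
```

For 1.5 + 2.25 the exponent gap is 1, so 1.5's significand is shifted right once before the add, and the sketch lands on 3.75 exactly as native addition does. A real adder/subtractor like the one in the design above also handles signs, guard/round/sticky bits, and the leading-zero count after subtraction.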
Feel free to reach out if you have questions or suggestions!