“GigaMACS” stands for “Giga Multiply-and-Accumulates.” GigaMACS is our flagship product, and it is commercially ready. What is it? GigaMACS takes your TensorFlow or other CNN (convolutional neural network) model as-is and uses our patent-pending technology to compile a hyper-optimized bitstream for your FPGA or custom ASIC. GigaMACS automatically optimizes your model for near-zero latency and no buffering, enabling it to handle full-frame HD or 4K camera input at input speed, in real time. If your camera runs at 80 fps or even faster, your model will run at the same rate. GigaMACS can easily take models to 240 fps in full 4K.
Are you testing your model at 224x224x3? While NVIDIA hardware tops out around 75 fps on these tiny images, GigaMACS exceeds 3,000 fps. Why not go to full-frame HD or 4K with GigaMACS and be real-time? In a test of SqueezeNet on an NVIDIA Tesla V100 on AWS reaching 25 fps, the GigaMACS automatically optimized SqueezeNet on FPGA hardware reached the full 80 fps. The NVIDIA hardware clock runs at 10x the speed of the FPGA, yet GigaMACS still outperforms it, a per-clock advantage of more than an order of magnitude. NVIDIA also showed a latency of nearly 60 milliseconds, whereas GigaMACS measured 1.6 milliseconds.
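The benchmark numbers above can be sanity-checked with simple arithmetic. This is a minimal sketch using only the figures quoted in this section (25 fps and ~60 ms for the V100, 80 fps and 1.6 ms for the GigaMACS FPGA build); the variable names are illustrative, not part of any GigaMACS API:

```python
# Benchmark figures quoted above (SqueezeNet, NVIDIA Tesla V100 on AWS
# vs. GigaMACS-optimized SqueezeNet on FPGA hardware).
gpu_fps, fpga_fps = 25, 80            # throughput, frames per second
gpu_latency_ms, fpga_latency_ms = 60, 1.6  # end-to-end latency

throughput_speedup = fpga_fps / gpu_fps            # raw fps advantage
latency_speedup = gpu_latency_ms / fpga_latency_ms # latency advantage
# The GPU clock is ~10x faster, so the per-clock-cycle advantage is larger:
per_clock_advantage = throughput_speedup * 10

print(f"throughput: {throughput_speedup:.1f}x")    # 3.2x
print(f"latency:    {latency_speedup:.1f}x")       # 37.5x
print(f"per clock:  {per_clock_advantage:.0f}x")   # 32x
```

This is how the "order of magnitude per clock" claim is derived: 3.2x more frames per second on a clock running at one tenth the speed works out to roughly 32x more work per cycle.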
Adding more memory and faster GPUs or TPUs is not the answer and ultimately leads to a dead end. GigaMACS removes redundancies and looping while automatically identifying and implementing optimizations, making your model perform in real time. Your AI/ML model will behave identically to how it does today, just much faster. MUCH MUCH FASTER!
Gigantor has one patent filed for GigaMACS and two more in progress. Gigantor literally transforms your machine learning model into a pipeline that performs as fast as its input.
Contact us to schedule your demo.