GigaMACS

“GigaMACS” stands for “Giga Multiply and Accumulates,” plural. GigaMACS is our flagship product, and it is commercially ready. What is it? GigaMACS takes your TensorFlow or other CNN (convolutional neural network) model as-is and uses our patent-pending technology to compile a hyper-optimized bitstream for your FPGA or your custom ASIC. GigaMACS automatically accelerates your model to near-zero latency, requires no buffering, and lets your model handle full-frame HD or 4K camera input at the camera's own rate, in real time. If your camera runs at 80 fps or faster, your model will run at the same rate. GigaMACS can easily take models to 240 fps in full 4K.
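To make the workflow concrete, here is a minimal Python sketch. The model definition and export are standard TensorFlow/Keras (a recent version is assumed); the final compile command is purely a hypothetical placeholder, since this page does not document the actual GigaMACS tool interface.

import tensorflow as tf

# Stand-in CNN; in practice this would be your existing trained model, used as-is.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1080, 1920, 3)),            # full-HD RGB frames
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Export the model as a TensorFlow SavedModel (requires a recent TF/Keras).
model.export("my_cnn_savedmodel")

# Hypothetical compile step (illustration only, not the documented interface):
# hand the exported model to the GigaMACS toolchain and receive an FPGA
# bitstream or ASIC-ready design in return, e.g.
#   gigamacs compile my_cnn_savedmodel --target fpga --out my_cnn.bit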

Why does everyone benchmark their models with tiny 224x224x3 images? Because they can't run full-frame, high-resolution input without dropping frames while trying to keep up. While nVidia performs at up to 75 fps with these tiny images, GigaMACS exceeds 3,000 fps. Why not go to full-frame HD or 4K with GigaMACS and be real-time? In our test, an nVidia Tesla V100 on AWS running SqueezeNet reached 25 fps on full 1920x1080x3 frames, while the GigaMACS automatically optimized SqueezeNet, tested on FPGA hardware, sustained the full 80 fps. The nVidia hardware clock runs at 10x the speed of the FPGA, yet GigaMACS still outperforms it: 80 fps versus 25 fps is over 3x the throughput, which works out to more than an order of magnitude per clock cycle. The nVidia setup also showed nearly 60 milliseconds of latency, whereas GigaMACS measured 1.6 milliseconds. GigaMACS can perform as fast as the input.
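The arithmetic behind these comparisons is easy to check; the short Python snippet below simply recomputes the ratios from the figures quoted above.

# Back-of-the-envelope arithmetic using the figures quoted above.
tiny = 224 * 224 * 3          # standard benchmark image: 150,528 values
hd   = 1920 * 1080 * 3        # full HD frame: 6,220,800 values
print(hd / tiny)              # ~41x more data per frame than the tiny benchmark

gpu_fps, fpga_fps = 25, 80    # measured SqueezeNet throughput on full HD frames
print(fpga_fps / gpu_fps)     # ~3.2x higher throughput on the FPGA

clock_ratio = 10              # the GPU clock runs ~10x faster than the FPGA
print(fpga_fps / gpu_fps * clock_ratio)  # ~32x more work per clock cycle

gpu_latency_ms, fpga_latency_ms = 60, 1.6
print(gpu_latency_ms / fpga_latency_ms)  # ~37x lower latency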

Adding more memory and faster GPUs or TPUs is not the answer for true acceleration of your NN model; it ultimately leads to a dead end with minuscule gains. GigaMACS removes redundancies and looping while automatically identifying and implementing optimizations, making your model perform in real time. Your AI/ML model will produce exactly the same results it does today, just much faster. MUCH, MUCH FASTER!
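As a rough intuition for what "removing looping" means, here is a conceptual sketch only, not GigaMACS's actual method: in software, a 3x3 convolution at one pixel is a loop of multiply-accumulate (MAC) steps, while in pipelined hardware the same nine MACs become fixed parallel multipliers feeding an adder tree, producing one result per clock once the pipeline fills. Either way the numerical result is identical.

def conv3x3_looped(window, kernel):
    # Nine MACs expressed as the usual nested software loop.
    acc = 0.0
    for i in range(3):
        for j in range(3):
            acc += window[i][j] * kernel[i][j]
    return acc

def conv3x3_unrolled(window, kernel):
    # The same nine MACs with the loop removed, as a fixed dataflow.
    w, k = window, kernel
    return (w[0][0]*k[0][0] + w[0][1]*k[0][1] + w[0][2]*k[0][2] +
            w[1][0]*k[1][0] + w[1][1]*k[1][1] + w[1][2]*k[1][2] +
            w[2][0]*k[2][0] + w[2][1]*k[2][1] + w[2][2]*k[2][2])

window = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
assert conv3x3_looped(window, kernel) == conv3x3_unrolled(window, kernel)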

Gigantor will literally transform your machine learning model into a pipeline that performs as fast as the input.

Contact us to schedule your demo