GigaMACS™ Patented Technology

GigaMACS™ is patented technology proven to accelerate artificial intelligence for real-time operations. All claims were allowed within two months of the April 15, 2021 filing, and Patent No. 11,099,854 was officially issued on August 24, 2021.

GigaMACS™ accepts high-definition input pixels directly from the source and produces outputs with sub-millisecond latency. Because GigaMACS™ does not depend on RAM-to-RAM computations, resource bottlenecks are eliminated. Most cameras deliver 60 FPS, while more powerful cameras can produce 1,000 FPS; GigaMACS™ processes input pixels as fast as the camera can deliver them. Adding complexity to a large model is not a challenge: GigaMACS™ runs the inference without changing the model and without slowing down as model size increases.
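A quick way to see why sub-millisecond latency matters is to compute the interval between frames at different camera rates. The sketch below is illustrative arithmetic only:

```python
# Illustrative arithmetic: the frame interval is the time budget available
# before the next frame arrives from the camera.
for fps in (60, 240, 1000):
    interval_ms = 1000 / fps  # milliseconds between consecutive frames
    print(f"{fps:>5} FPS camera -> {interval_ms:.2f} ms per frame")

# Output:
#    60 FPS camera -> 16.67 ms per frame
#   240 FPS camera -> 4.17 ms per frame
#  1000 FPS camera -> 1.00 ms per frame
```

With sub-millisecond latency, each frame is finished before the next one arrives, even from a 1,000 FPS camera.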

We compared GigaMACS™ to common hardware on the market

Inference models running on standard market hardware struggle to process high-definition frames, so models typically use low-resolution images for training and execution. When the same weights are applied across a high-resolution frame, that hardware cannot keep up with the camera and consequently must drop frames.

The test data reports frames per second (FPS) and latency for two image sizes: a lower-resolution 224x224x3 image and a high-definition 1920x1080x3 image. The models used for testing are identical across all hardware, and the results clearly demonstrate how much faster GigaMACS™ can process real-time video streams.

But please don't take our word for it; look over the data and view the test results.

Understanding the Demo Videos

The videos show a simple MNIST inference model identifying digits. The model identifies moving objects, just as in a real-world scenario. Gigantor created this simple model to demonstrate GigaMACS™ on a Xilinx VU9P FPGA against other hardware on the market. Of course, GigaMACS™ performs at the same lightning speeds on ASICs or FPGAs from any vendor or source.

Unfortunately, nVidia's hardware struggles to keep up: even though the model is small, it drops frames. The GigaMACS™ demos show the same inference smoothly processing 240 FPS almost instantly, without losing any data.

nVidia A100 vs. GigaMACS™

GigaMACS™ processes high-definition images at 240 FPS within a fraction of a millisecond, while nVidia's A100 processes 28 FPS with a 41-millisecond latency.

The A100 drops 212 frames per second, which means 88% of the data is lost.
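These drop rates, along with the V100 and GTX 970 figures in the next two sections, follow directly from the 240 FPS camera rate used in the demos. The sketch below is illustrative arithmetic only, recomputing each percentage from the published FPS numbers (they round to the 88% and 90% figures quoted on this page):

```python
# Recompute the quoted drop rates: the demo camera delivers 240 FPS, and
# whatever a GPU cannot process in time is dropped.
CAMERA_FPS = 240

for gpu, processed_fps in [("A100", 28), ("Tesla V100", 22), ("GTX 970", 23)]:
    dropped = CAMERA_FPS - processed_fps
    loss = dropped / CAMERA_FPS
    print(f"{gpu:>10}: drops {dropped} FPS -> {loss:.1%} of the data lost")

# Output:
#       A100: drops 212 FPS -> 88.3% of the data lost
# Tesla V100: drops 218 FPS -> 90.8% of the data lost
#    GTX 970: drops 217 FPS -> 90.4% of the data lost
```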

Watch nVidia A100 with HD Frames

Tesla V100 vs. GigaMACS™

nVidia's Tesla V100 processes HD images at 22 FPS with a 45-millisecond latency, and low-resolution (224x224x3) images at 44 FPS with a 23.4-millisecond latency.

The V100 drops 218 frames per second, which means about 90% of the data is lost.

Watch nVidia Tesla V100 with HD Frames

nVidia GTX 970 vs. GigaMACS™

nVidia's GTX 970 processes HD images at 23 FPS with a 43-millisecond latency, and low-resolution (224x224x3) images at 75 FPS with a 13.7-millisecond latency.

The GTX 970 drops 217 frames per second, which means about 90% of the data is lost.

Watch nVidia GTX 970 with HD Frames

Clock Speeds

GigaMACS™ hardware runs significantly slower and cooler while generating higher FPS with near-zero latency. All of the nVidia boards run at over a billion cycles per second yet achieve fewer frames per second. GigaMACS™ runs at a sedate 125 MHz and still outperforms GPUs running over 1 GHz and using ten times the power and heat.
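One way to quantify that clock-efficiency gap is cycles per processed frame. The sketch below uses the figures quoted on this page plus an assumed nominal 1.4 GHz GPU clock; exact boost clocks vary by board:

```python
# Cycles available per processed frame, from the figures on this page.
# The 1.4 GHz GPU clock is an assumed nominal value; exact boost clocks
# vary by board.
def cycles_per_frame(clock_hz: float, fps: float) -> float:
    return clock_hz / fps

giga = cycles_per_frame(125e6, 240)  # GigaMACS™: 125 MHz at 240 FPS
gpu = cycles_per_frame(1.4e9, 28)    # GPU example: ~1.4 GHz at 28 FPS (A100, HD)

print(f"GigaMACS™: {giga:,.0f} cycles per frame")  # ~520,833
print(f"GPU:       {gpu:,.0f} cycles per frame")   # 50,000,000
print(f"The GPU burns ~{gpu / giga:.0f}x more cycles per frame")  # ~96x
```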

GigaMACS™ Beats the Competition

Stop sacrificing model size, speed, and performance to standard hardware. With GigaMACS™, design larger and more complex models without losing data, lowering image resolution, or slowing down.

GigaMACS™ accepts high-definition images as fast as the camera can deliver the frames. No changes are made to the model, so it operates identically to the original, just much faster.

GigaMACS™ is the solution to artificial intelligence acceleration and superior model performance.