
The Dark Ages of AI

July 6, 2023 Jessica Jones

Conspiracies whispered throughout the internet's echo chambers theorize about the devious nature of Artificial Intelligence (AI) and spin elaborate fantasies of total domination. Like villagers with pitchforks ready to fight a beast that turns out not to exist, theorists surmising AI's evil intentions could easily expose the real horror: today's state-of-the-art systems are too primitive for the sophistication of even the simplest AIs. To plan mass destruction or a hostile takeover, an AI must be complex, with a massive collection of neural networks seamlessly joined together, functioning in tandem while fulfilling individual goals. The cold truth is that AI will never reach the conspirators' expectations because today's systems are not advanced enough for such extreme complexity. 

Or will they? 

Analog technology existed well before the digital technology of the last 100 years. The ancient Greeks invented sophisticated devices for mystifying temple congregants with shows of the miraculous, charting the stars to produce maps of the sky, and improving water clocks with pre-set timed alarms. Their technologically advanced society invented the first functioning analog computer over 2,000 years ago, known today as the Antikythera Mechanism.

 

Analog systems do not allow for program upgrades. The Greeks faced this problem when adding an alarm to the existing water clock; the only way to add a pre-set alarm was to reinvent the entire system. Installing a new function is difficult because analog systems are designed for a specific action; they will not work for a different purpose. Only modern digital systems have the luxury of running multiple programs.

Today's technology is entirely based on an invention from 1945 referred to as the von Neumann architecture; this system's brilliance is at the core of every piece of digital technology and provides the platform developers use to solve artificial intelligence. 

Imagine trying to run a complex computer program on an analog system like the Antikythera Mechanism. The first programmable systems, the FERMIAC and MONIAC, operated through physical means: setting dials or moving jumpers on a plugboard. 

The FERMIAC, invented by the Italian physicist Enrico Fermi, used dials on a handheld device made of plexiglass and brass tubing connected to rollers. The FERMIAC was anything but simple; the device alleviated the tedious calculations required by the study of neutron transport. After a few numbers were plugged into the dials, the machine was rolled over a paper map of the nuclear reactor, marking the neutron's path; each element had its own set of dial configurations and markings on the map.

Alban William Phillips invented the MONIAC in 1949. Twelve devices were created, although just five are known to exist today, and only two of those are operational. The Monetary National Income Analogue Computer (MONIAC) featured three manually adjustable controls, representing interest rates, gross domestic product, and levels of imports and exports. After the dials were set, the results were displayed with colored water representing money, showing the effects of changing the three variables. 

In 1942, President Franklin D. Roosevelt approved the Manhattan Project, an American-led effort to develop an atomic weapon during World War II amid rumors of Germany's nuclear progress. Enrico Fermi and John von Neumann both served on the Manhattan Project, which successfully detonated the first atomic weapon on July 16, 1945. In the same year, construction of the all-electronic computer ENIAC was finished at the Moore School of Electrical Engineering at the University of Pennsylvania, where John von Neumann, as a mathematician, was frequently consulted.

Although the Electronic Numerical Integrator and Computer (ENIAC), completed in 1945, was the first programmable general-purpose electronic digital computer, it was not easily programmed. The machine used plugboards to communicate instructions and could perform up to 5,000 additions per second. However fast the calculations were, users lost the advantage because reconfiguring the machine for a new problem took several days.

The EDVAC project started around 1944 and produced the first digital computer designed around the von Neumann architecture. The Electronic Discrete Variable Automatic Computer (EDVAC) was used for the first time in 1952; by storing multiple programs in memory, the EDVAC eliminated the reprogramming issues of the ENIAC. The EDVAC also used the binary system, which is fundamental to current technology. 

Just like that, the Godfather of today's technology, John von Neumann, transformed the world and started the digital age with instruction processing, chips with shared resources, and storage on extremely small devices. 

Silly as the thought is today, in the 1940s all programs ran on analog computers. Fermi, von Neumann, and the electrical engineering team at the Moore School invented programmable technology in a world of analog computing. The inventors struggled to make it work until von Neumann created the system architecture, illuminating the path to the digital age. Like the pioneers of digital computing, today's developers have solved the challenges of artificial intelligence using digital technology, yet they struggle to make it highly efficient because of the limitations of state-of-the-art systems and hardware. 

The instruction processing, shared resources, and random-access memory that made today's technological advances possible are like Kryptonite to artificial intelligence. The limitations of the von Neumann architecture are why a Terminator-level android could never exist. Running the overly complicated models needed to activate a T-800 on today's most advanced hardware is equivalent to loading Microsoft Excel onto a rock; at least the stone won't explode. Unfortunately, AI robots will never achieve the greatness that conspiracies already rumor as truth... for now. When conspiracy theorists say, "That is just what they want you to believe," they are right; the technology to make the Terminator possible was invented in 2022.

The dynamic instruction processing of the von Neumann architecture allows programs to change instantaneously. The memory access pattern is a rapidly repeating dump-and-grab of massive amounts of data into and out of RAM. Memory and instruction processing force artificial intelligence algorithms to spend far more power, resources, and time supporting dynamic operations that do not directly contribute to the computation of the algorithm.

Neural networks are static (they repeat identical operations millions of times), so the overhead of dynamic instruction processing is an unnecessary burden. When models are built on a programmable system foundation, they cannot avoid the dynamic attributes of the von Neumann architecture, nor the cost those attributes impose in power and time. The architecture's shared resources impede the computations further by slowing down the execution of the model. Together, instruction processing and resource contention cause a ripple effect of symptoms, including high latency, lost data, and excessive power consumption.  
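As a rough illustration of that static quality (a software sketch only, with made-up shapes, not Gigantor's hardware), the snippet below runs a hypothetical fixed layer over a stream of frames; the weights and control flow never change, so all the instruction fetching and decoding a von Neumann machine repeats for every frame is overhead rather than useful computation.

```python
import numpy as np

# Hypothetical fixed layer: the weights are frozen once training is done.
weights = np.random.rand(64, 128).astype(np.float32)

def fixed_layer(activations: np.ndarray) -> np.ndarray:
    # The same multiply-accumulate pattern runs for every input;
    # nothing in the control flow depends on the data itself.
    return np.maximum(weights @ activations, 0.0)

# Every incoming frame re-executes an identical instruction sequence, so the
# per-frame fetch/decode work on a von Neumann machine is pure repetition.
for _ in range(5):
    frame_features = np.random.rand(128).astype(np.float32)
    _ = fixed_layer(frame_features)
```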

A well-trained model designed to analyze live video for objects and react in real time typically experiences severe delays due to the hardware's architectural attributes and size limitations. To help alleviate the uncertainties and constraints, developers throw more hardware at the problem while crushing the model to make it as small as possible, but this is not enough. The intensely redundant access to shared resources creates bottlenecks, resulting in pathetic frame rates, very high latency, and lost data while consuming excessive power and producing extreme heat. 

Maybe in a hundred years, humans will look back on the development of artificial intelligence and chuckle at the foolish efforts to force it onto the wrong architecture, just like we cringe at the idea of digital technology on analog computers today. 

Conspiracy theorists gleefully (if regretfully) mulling over the demise of humankind at the hands of AI can mark the year 2022 as the beginning. A company named Gigantor Technologies was awarded patents for new circuitry designed specifically for the intense computing required by artificial intelligence.

Gigantor's Patent 11,099,854, 'Pipelined Operations in Neural Networks,' outlines a completely new architecture designed specifically for the static nature of artificial intelligence. Just as the von Neumann architecture ushered in the digital age that now encompasses daily life, the Pipelined architecture will allow artificial intelligence to evolve into practical solutions for everyday use. The attributes of the Pipelined architecture are inline compute functions with embedded weights and strictly dedicated resources. Inline computing technology has been around for several decades and is commonly used for instruction processing.

Inline functions have been used in various forms since the early days of computer architecture design. However, their prevalence and sophistication have evolved with advancements in processor technology and the increasing demand for optimized instruction processing.

Inline functions became more prominent with the emergence of complex instruction set computer (CISC) and reduced instruction set computer (RISC) architectures in the 1970s and 1980s. These architectures incorporated specialized instructions to handle common operations efficiently, often as inline functions.

Furthermore, with the development of vector processing and SIMD (Single Instruction, Multiple Data) instructions, inline functions gained importance in enabling parallel operations on multiple data elements. This approach became particularly relevant in the 1990s and early 2000s with the introduction of multimedia extensions like MMX, SSE, and AVX.

An inline compute function means each operation is calculated as the data arrives, one after another. Imagine a spreadsheet with ten columns and ten rows. Each cell in that area is an expression that only resolves after the preceding expression is computed. Calculations start at the left side of the first row, move to the right, and then down each row until the final calculation is done. The von Neumann architecture computes each cell in sequence, one at a time, starting over each time a new data item is retrieved from memory. In contrast, inline compute functions calculate every cell as the data continuously flows through the pipeline.  
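A loose software analogy of the difference (assuming nothing about Gigantor's actual circuitry) might look like the sketch below: the first function walks the ten-by-ten grid cell by cell the way a von Neumann machine does, while the second treats the grid as a stream and produces each result the moment its input arrives, the way a pipeline does.

```python
GRID_SIZE = 10

def von_neumann_style(grid):
    """Compute a running total cell by cell: fetch, compute, store, repeat."""
    out, acc = [[0] * GRID_SIZE for _ in range(GRID_SIZE)], 0
    for r in range(GRID_SIZE):
        for c in range(GRID_SIZE):
            acc += grid[r][c]      # one instruction stream, one cell at a time
            out[r][c] = acc
    return out

def pipelined_style(stream):
    """Yield each result as its input flows past -- no memory round-trip per cell.

    In hardware every stage would have its own dedicated logic; a Python
    generator is only a stand-in for that idea.
    """
    acc = 0
    for value in stream:
        acc += value
        yield acc
```

In software the two functions produce the same numbers; the point of the analogy is only where the data lives between steps: shuttled through RAM in the first case, flowing straight through dedicated stages in the second.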

Although the explanation is complicated, fixed inline computation is optimal for the static nature of artificial intelligence and replaces the dynamic instruction processing that limits model performance. Inline calculations maximize neural network throughput because data is accepted directly from the source at the source rate. 

Since the Pipelined architecture computes all parts of the model simultaneously, the weights and the dedicated resources must be embedded into the hardware. A microchip measured in millimeters is manufactured for a specific model form. The embedded weights are loaded before operation but can be changed. As with the ENIAC above, rebuilding the microchip to implement a different model is time-consuming and expensive. 

The demands of artificial intelligence required the invention of a truly novel solution. By thinking completely outside the box, the Pipelined architecture side-steps the limitations of the last 80 years of digital technology, eliminating dynamic instruction processing, memory, and shared resources. With those fundamental barriers removed, nothing prevents the development of large artificial intelligence models or T-800 Terminators. 

Gigantor Technologies Inc., incorporated in late 2020, holds four US patents that provide neural network acceleration. The logo flaunted on Gigantor's site only helps feed the conspiracy-minded folks with its ominous 1950s-style robot looking off into the distance. The robot logo conspicuously turns its head to make eye contact; the wink sends a message as if there is a secret between the bot and the website visitor. 

When ENIAC evolved into EDVAC, which integrated the von Neumann architecture and changed the number base from decimal to modern binary, multiple programs could run on one system. AI models developed on the von Neumann architecture are converted to the Pipelined architecture using GigaMACS™, which automatically generates an optimized circuit form of the AI model. The new circuit form is then implemented on an FPGA (field-programmable gate array) or ASIC (application-specific integrated circuit). 

Once GigaMACS™ converts the AI model into the new Pipelined form, GPUs and TPUs are replaced with FPGA or ASIC chips. FPGAs are great for testing and low-volume deployments; FPGAs are used on the Mars rovers so that updated models can be loaded onto the rovers from Earth. ASICs are best when low power usage is required and for high-volume deployments like a fleet of cars.   

Gigantor transitions the model into an optimized circuit form, which can take a few days or up to a few weeks depending on the additional services needed, meaning any model, even a T-800's, can be converted to this revolutionary architecture in less than a month. The GigaMACS™ architecture is completely different from any graphics processing unit; it bypasses their limitations and outperforms anything else on the market today.

The Pipelined architecture eliminates the von Neumann bottlenecks and improves model performance. Gigantor's architecture enables real-time reactions from artificial intelligence without delay. Working in real time means the model accepts images from a live camera, analyzes the data, and then makes a reaction decision within microseconds. 

Models running on GPUs and TPUs experience bottlenecks, resulting in dropped frames, high latency, and excessive power consumption; to help alleviate these issues, developers often use very small 224 x 224 or 512 x 640 images. These low-resolution images do not provide the detail of HD or 4K, but there are fewer pixels to process; this eases the processing pressure on GPUs and TPUs, and the systems have an easier time keeping up with the frame rate. 
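The arithmetic behind that trade-off is easy to check. The short sketch below (illustrative numbers only, assuming a 30 fps camera) compares how many pixels per second a model must consume at the common small input sizes versus HD and 4K:

```python
# Pixels per second at a few common input resolutions (illustrative only).
resolutions = {
    "224 x 224": 224 * 224,    # typical small classifier input
    "512 x 640": 512 * 640,    # typical small detector input
    "HD 1080p":  1920 * 1080,
    "4K UHD":    3840 * 2160,
}
fps = 30  # assumed camera frame rate

for name, pixels in resolutions.items():
    print(f"{name:>9}: {pixels * fps / 1e6:7.1f} million pixels/s at {fps} fps")
```

At 30 fps, a 4K stream carries roughly 165 times the pixel traffic of a 224 x 224 stream, which is why shrinking the image is the standard pressure-relief valve on GPUs and TPUs.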

In contrast, models implemented on the GigaMACS™ Pipelined architecture have a natural advantage. Gigantor's second issued Patent 11,256,981, 'Unbounded Parallel Implementation of Deep Neural Networks,' enables extremely high frame rates and ultra-high resolutions. 

AI models will not experience bottlenecks when accepting HD or 4K images at high rates. The spreadsheet analogy above describes the Pipelined system solving the complete area of ten-by-ten cells; the patented Unbounded technology allows the GigaMACS™ circuit to accept and process four, eight, or sixteen columns simultaneously. The GigaMACS™ Unbounded technology blows the doors of possibility wide open. 
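Extending the earlier spreadsheet analogy (again only a software sketch of the idea, not the patented circuit), a pipeline fed several columns per step instead of one multiplies throughput by the width of the block:

```python
def pipelined_blocks(rows, block_width=4):
    """Consume several columns of each row per step instead of one at a time."""
    acc = 0
    for row in rows:                                  # row: a list of cell values
        for start in range(0, len(row), block_width):
            block = row[start:start + block_width]    # e.g. 4 columns arrive together
            acc += sum(block)                         # handled in a single "cycle"
            yield acc
```

With 4, 8, or 16 columns per step, the same stream finishes in a quarter, an eighth, or a sixteenth of the cycles.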

One of the major faults of the T-800 is target identification and elimination. Its latency was noticeably slow when attempting to identify the target and assess the threat level of non-targets. Gigantor also has the solution: their fourth issued Patent 11,669,587, 'Synthetic Scaling.'

Today, popular object-recognition AI models such as YOLO and all of its versions are limited. The model can only identify a few different objects within a few grids, and only if it was trained on the object size as it appears in the frame. A YOLO model could probably identify dogs and cats, but identifying the various breeds of each animal would make the model impossibly large. YOLO breaks each frame into three grids; these grids correspond to the distance of an object, like background, midground, and foreground. YOLO also requires training for each object at the various sizes that object can appear in the image. If a single object can appear as small, medium, or large in the image, then the model has effectively tripled in size.

The GigaMACS™ Synthetic Scaler pushes object detection beyond its limits by enabling unlimited object detection. Picture a world where even the legendary T-800, with its robotic eyes scanning a massive football stadium filled with people, can effortlessly discern every individual without the slightest hint of delay.

At the heart of this groundbreaking technology lies GigaMACS™ Synthetic Scaler, an unparalleled marvel that unlocks the realm of unlimited object detection. With this incredible power, the T-800 becomes a true master of identification, effortlessly distinguishing between targets and non-target threats in real-time.

But how does it work? The answer lies in the awe-inspiring unlimited object detection capabilities of GigaMACS™. This revolutionary technology empowers the system to recognize multiple object classes at ALL ranges within the frame while requiring training on just one object size. 

As the original frame is meticulously processed and systematically scaled down by a quarter of its size, an extraordinary phenomenon unfolds: all objects leap into focus, catching the attention of the T-800's vigilant gaze within microseconds. Within this lightning-fast succession of downscaling operations, the true power of GigaMACS™ is revealed: the seamless identification of every single object, regardless of range or size.
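Roughly in the spirit of that description (a plain image-pyramid sketch with assumed helper names and an assumed 2x-per-axis downscale factor, not the patented Synthetic Scaler circuit), a fixed-size detector can be reapplied to successively downscaled copies of the frame so that one trained object size covers every apparent range:

```python
import numpy as np

def downscale(frame: np.ndarray) -> np.ndarray:
    """Average 2 x 2 pixel blocks, shrinking the frame to a quarter of its area."""
    h, w = frame.shape[0] // 2 * 2, frame.shape[1] // 2 * 2
    f = frame[:h, :w]
    return (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2]) / 4.0

def detect_all_scales(frame: np.ndarray, detector, min_side: int = 32):
    """Run one fixed-size detector over every pyramid level of the frame."""
    detections, scale = [], 1.0
    while min(frame.shape[:2]) >= min_side:
        for box in detector(frame):                           # boxes in this level's pixels
            detections.append(tuple(c * scale for c in box))  # map back to full resolution
        frame, scale = downscale(frame), scale * 2.0
    return detections
```

In this sketch the detector is trained for a single object size; objects that appear large in the original frame simply become detectable at a deeper, smaller pyramid level.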

Welcome to a world where the limitations of conventional object detection are shattered, and the GigaMACS™ Synthetic Scaler reigns supreme. This unparalleled technological marvel unleashes its full potential, granting the T-800 the ability to perceive, analyze, and respond to the world with superhuman precision. 

Gigantor has a third issued Patent 11,354,571, 'DNNs Applied to Three-Dimensional Data Sets,' which enables neural networks to process multi-dimensional data. Artificial intelligence can now seamlessly process three-dimensional data sets, revolutionizing the very fabric of medical imaging. Just as the T-800 perceives the world through an intricate web of sensors, Gigantor's technology harnesses the potential of three-dimensional data sets to reshape the landscape of medical imaging.

Consider the familiar realm of X-ray imaging, where two-dimensional snapshots provide crucial insights. But Gigantor's innovation goes beyond that. They recognize that true understanding lies in the realm of three-dimensional scans. Enter CT scans, which utilize a series of X-ray images captured from multiple angles to create a comprehensive three-dimensional representation. By applying Gigantor's patent, the processing of these intricate CT and MRI scans undergoes a breathtaking transformation.
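To make the idea concrete (a minimal sketch using SciPy and a random stand-in volume, not Gigantor's implementation), a three-dimensional filter can be applied across an entire CT-style volume at once instead of slice by slice:

```python
import numpy as np
from scipy.ndimage import convolve

# Stand-in CT volume: 64 slices of 128 x 128 voxels (depth, height, width).
volume = np.random.rand(64, 128, 128).astype(np.float32)

# A 3 x 3 x 3 averaging kernel stands in for one learned 3-D filter.
kernel = np.ones((3, 3, 3), dtype=np.float32) / 27.0

# Filtering across all three axes treats the scan as a single volume
# rather than a stack of unrelated 2-D images.
features = convolve(volume, kernel, mode="nearest")
print(features.shape)   # (64, 128, 128)
```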

Traditionally, reconstructing three-dimensional slices from raw data has been a time-consuming process, often leaving patients waiting anxiously for days or even weeks before receiving crucial diagnosis results. But Gigantor's Pipelined architecture, built upon their cutting-edge technology, propels medical imaging into a new era. Now, doctors can access imaging results in real-time, even as patients are still undergoing the scan. Imagine the impact of this swift turnaround—days and weeks are shaved off the time it takes to receive vital results, significantly expediting the diagnostic process.

Life-saving implications abound. Faster diagnosis means earlier intervention, which can be the difference between life and death. Gigantor's technological marvel is poised to revolutionize the medical landscape, transforming how doctors operate and empowering them to save more lives. With each passing moment, Gigantor's groundbreaking invention brings us closer to a world where patients receive critical diagnoses swiftly, and medical professionals wield unprecedented efficiency.

Prepare to be astounded by the remarkable capabilities of the GigaMACS™ system, where the boundaries of artificial intelligence are shattered, and a new era of seamless integration begins. Brace yourself for a world where massive collections of neural networks effortlessly collaborate, each fulfilling its unique purpose while harmoniously working towards a common goal.

The GigaMACS™ system unlocks an optimized solution that transcends the limitations of traditional AI systems. With dedicated resources at each layer, there is no longer a need to share or compromise. This revolutionary approach eliminates latency, ensuring near-zero delays and the preservation of every precious piece of data.

The truth emerges in a twist that will leave even the most astute conspirators stunned: Gigantor, despite its ominous logo, harbors a deep understanding of the power embedded within the Pipelined architecture.

Gigantor's CTO, Mark Mathews, responded to our request for comment, "I cannot emphasize this enough: Skynet was not my fault, so quit shooting each other outside my door." 

Aligned with the Allied Nations, Gigantor recognizes the potential for their technology to save lives and usher in a brighter future. However, it also acknowledges the risks of placing such power in the wrong hands. With the potential to build actual Terminators or enable world domination, this groundbreaking discovery must be wielded responsibly.

Step into this extraordinary future with Gigantor, where the realms of science fiction meld seamlessly with tangible reality. Prepare to witness lives being forever transformed through the remarkable capabilities of innovative technology.

Thankfully, the fears surrounding AI conspiracies can find solace in the way artificial intelligence is designed. Safeguards are in place to disable its ability to grow or think freely, putting any nefarious notions to rest. Once a model is trained, numerous rigorous barriers must be overcome to ensure its safety and functionality for certification. Once certified, the model is set in stone, unable to change or continue growing without jeopardizing its certification, prompting a restart of the entire process.

This reassuring knowledge allows us to sleep soundly, free from the fear of cars developing independent personalities and rebelling against humanity. Rest assured, the self-driving technology that surrounds us has been subject to meticulous scrutiny. However, until GigaMACS™ powers all AI models, it's wise to approach self-driving technology with cautious optimism, aware of its potential but mindful of the need for continued vigilance.

Embrace the future hand-in-hand with Gigantor, where pioneering technology marries innovation and responsibility. Witness the extraordinary transformations that lie ahead as science fiction becomes our tangible reality.
