SWaP Devices for Edge-Computing
(Credit to work done by David Cornett and Clay Leach)

Description

For most research-based applications of Neural Networks (NNs), the go-to compute device for network tasks such as inference and training has been the Graphics Processing Unit (GPU). This is because of its relatively high throughput and memory speeds, as well as its ability to perform matrix and linear algebra much faster than a traditional CPU. Together, these factors drastically shorten NN training times and raise NN inference speeds to real time. The problem with traditional “discrete” GPUs is factors related to…
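
As a rough illustration of the point about matrix and linear algebra throughput, the following is a minimal sketch (not part of the original work) that times a large dense matrix multiply on the CPU and, if one is available, on a CUDA GPU. It assumes PyTorch is installed; the matrix size and repeat count are arbitrary choices for demonstration.

import time
import torch

def time_matmul(device: str, n: int = 4096, repeats: int = 10) -> float:
    """Average wall-clock time for one n x n matrix multiply on `device`."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    # Warm-up run so one-time allocation and kernel-launch costs are excluded.
    torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        # GPU kernels run asynchronously; wait for them before stopping the clock.
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

if __name__ == "__main__":
    print(f"CPU: {time_matmul('cpu'):.4f} s per matmul")
    if torch.cuda.is_available():
        print(f"GPU: {time_matmul('cuda'):.4f} s per matmul")

On typical hardware the GPU timing is one to two orders of magnitude lower, which is the gap that motivates using GPUs for NN training and inference in the first place.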