As IoT and 5G technologies thrive, a growing number of devices collect and generate colossal amounts of data on a millisecond basis. Shuttling this flood of data back and forth to cloud data centers for storage and processing is inefficient and expensive, and bandwidth becomes increasingly scarce as more machines are interconnected. With distributed AI deployment, mundane AI tasks such as data collation and basic processing are allocated to edge devices with moderate memory and computing power, such as industrial endpoints and local 5G base stations. These edge appliances are imbued with enough intelligence to ensure that only relevant and valuable data is sent to the central AI for crunching, allowing it to learn faster, infer deeper, and achieve more with lower costs, less bandwidth, and lower power consumption.

One trending application is edge-based Intelligent Video Analytics (IVA) for traffic management. IVA helps ease traffic congestion by monitoring road conditions in real time and taking timely action locally, without depending on time-consuming video transmission to and analytics in the cloud. It also reduces the cost of massive data transmission and intensive cloud services across a broad, geographically dispersed deployment of monitors, signals, and devices.
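The edge-side filtering described above can be sketched as follows. This is a minimal, hypothetical illustration, not a specific product API: the `Reading` type, the relevance scores, and the 0.8 threshold are all assumptions chosen for the example.

```python
# Hypothetical sketch of edge-side relevance filtering: each edge device
# scores its data locally and forwards only high-value readings upstream,
# so the central AI receives far less raw traffic.

from dataclasses import dataclass


@dataclass
class Reading:
    device_id: str
    relevance: float  # e.g. an anomaly score from a small local model


def filter_for_upload(readings, threshold=0.8):
    """Keep only readings whose local relevance score clears the threshold."""
    return [r for r in readings if r.relevance >= threshold]


readings = [
    Reading("cam-01", 0.12),  # routine traffic: dropped at the edge
    Reading("cam-02", 0.95),  # possible incident: forwarded to central AI
    Reading("cam-03", 0.81),  # borderline but relevant: forwarded
]

uploaded = filter_for_upload(readings)
print([r.device_id for r in uploaded])  # only the relevant readings survive
```

In a real deployment the relevance score would come from an on-device model (for instance, a lightweight detector on a 5G base station), but the bandwidth-saving logic is the same: discard locally, upload selectively.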
Memory in Distributed AI
Continuous data input from edge endpoints incrementally trains and optimizes the neural networks of the central ML models. The retrained central AI then feeds refined algorithms back to the edge for better prediction results. It's a continuous cycle of improvement.
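The improvement cycle can be sketched with a toy example. Everything here is an assumption for illustration: the single-weight "model", the learning rate, and the synthetic edge batches stand in for real training infrastructure.

```python
# Minimal sketch (hypothetical) of the distributed improvement cycle:
# edge devices contribute data, the central model retrains on it, and
# the refined parameter is pushed back to the edge for the next round.

def retrain(weight, edge_batches, lr=0.1):
    """Toy 'training': nudge a single weight toward the mean of each batch."""
    for batch in edge_batches:
        target = sum(batch) / len(batch)
        weight += lr * (target - weight)
    return weight


edge_weight = 0.0  # parameter currently deployed on edge devices
for cycle in range(3):
    # data collected at the edge during this cycle (synthetic values)
    edge_batches = [[1.0, 0.8], [0.9]]
    # central retraining on the aggregated edge data
    edge_weight = retrain(edge_weight, edge_batches)
    # the refined weight would now be redeployed to the edge devices

print(round(edge_weight, 3))
```

Each pass through the loop moves the deployed parameter closer to what the edge data suggests, mirroring the retrain-and-redeploy feedback described above.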