The Joint Edge Computing Platform lets enterprises train AI models at the edge of a network to support real-time decision-making
Alibaba Cloud and Intel have developed an internet-of-things computing platform designed to make it easier for enterprises to perform compute-intensive tasks, such as training artificial intelligence and machine learning models, at the edge of a network.
Dubbed the Joint Edge Computing Platform, the system provides an open architecture for IoT applications that integrates AI and cloud technologies for edge computing while catering to the needs of different industries.
At Alibaba Cloud’s annual technology conference in Hangzhou last week, Intel and Alibaba executives told a roomful of developers and business leaders that at the heart of the platform is Alibaba’s Link IoT Edge server.
Besides applying and training AI models at the edge of a network using data collected from IoT devices, the server also connects to Alibaba Cloud to crunch heavier workloads.
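That split between local processing and cloud offload can be sketched as a simple dispatcher. Everything below is hypothetical illustration; the real Link IoT Edge APIs are not shown, and the threshold-based routing is an assumption for the sake of the example.

```python
# Illustrative sketch of an edge/cloud workload split: light jobs run
# locally on the edge server, heavier ones are handed to the cloud.
# Function names and the FLOP budget are invented for this example.

def dispatch(task_name: str, estimated_flops: float, edge_budget: float = 1e9) -> str:
    """Route a workload to the edge or the cloud based on estimated cost."""
    if estimated_flops <= edge_budget:
        return f"edge: running {task_name} locally"
    return f"cloud: offloading {task_name}"

print(dispatch("defect inference", 5e8))   # small job stays on the edge
print(dispatch("model retraining", 5e12))  # heavy job goes to the cloud
```

In practice the routing decision would weigh latency, bandwidth and privacy constraints rather than a single compute estimate.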
As part of its collaboration with Alibaba Cloud, Intel is providing processors, silicon acceleration technologies and software optimisation capabilities to deliver the computing capacity required at the edge, along with the OpenVINO computer vision development toolkit.
Tim Sheedy, principal advisor of Ecosystm, a technology research and advisory firm, said with the edge computing platform, “Alibaba and Intel have taken an important step in putting computing power and AI software tools in and near edge IoT devices to ensure time-critical decisions happen at the edge – without the need to continually go back to the cloud or back to the core”.
An early adopter of the new platform is Yumei, an alloy die-casting specialist in Chongqing, China. The company used the platform’s computer vision capabilities to identify defects in real time, instead of waiting until the end of the manufacturing line, improving its defect detection rate fivefold.
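To give a flavour of what such a visual inspection step involves, the toy sketch below flags anomalous pixels on a surface image with a fixed brightness threshold. This is a deliberately simplified stand-in: a production system like Yumei's would use trained vision models, not a hand-set threshold, and none of the code below is Alibaba or Intel code.

```python
# Toy defect check: flag pixels on a greyscale surface image that are
# brighter than a threshold. Real inspection pipelines use trained models.
import numpy as np

def find_defects(image: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Return (row, col) coordinates of pixels exceeding the threshold."""
    return np.argwhere(image > threshold)

# A 4x4 casting surface with one unusually bright (defective) spot.
surface = np.full((4, 4), 0.2)
surface[2, 3] = 0.95
print(find_defects(surface))  # one defect, at row 2, column 3
```

Running such a check on the edge server, frame by frame, is what allows defective parts to be pulled before they travel further down the line.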
Although other cloud suppliers have dipped their toes into IoT and edge computing, their offerings typically perform AI inferencing at the edge using AI and ML models that have been trained in the cloud, in a bid to reduce bandwidth costs.
In July 2018, Google announced its Edge TPU, a tensor processing unit designed to run TensorFlow Lite machine learning models on mobile and embedded devices, while Amazon Web Services has its Snowball Edge device that lets enterprises move data from the edge to AWS, as well as perform edge computing tasks using the AWS Greengrass ML Inference capability.
“In manufacturing, public safety, transport, and other time-critical industries, IoT devices running ML inference are often not enough – as algorithms need to continuously learn and adapt – for making new decisions with new ML models on the fly,” said Sheedy.
“Using the current generation of ML inference enabled IoT devices is a good start, but the lack of adaption means their algorithms can go out of date in a hurry. Allowing the ML algorithms deployed on IoT devices to continue to learn and adapt adds significant new opportunities and applications for AI – and makes AI more meaningful and accurate for businesses that embrace this joint capability between Alibaba and Intel,” he added.
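The on-device adaptation Sheedy describes can be illustrated with online learning: rather than serving a frozen cloud-trained model, the device nudges its model with each new observation. The sketch below uses a toy linear model and stochastic gradient descent; the model, data and learning rate are placeholders, not anything from the Alibaba-Intel platform.

```python
# Minimal online learning sketch: a linear model y ~ w*x + b updated
# incrementally (one SGD step per sample), so it keeps adapting to the
# data stream instead of staying fixed after cloud training.

def sgd_step(w: float, b: float, x: float, y: float, lr: float = 0.1):
    """One stochastic gradient descent step under squared loss."""
    error = (w * x + b) - y
    return w - lr * error * x, b - lr * error

w, b = 0.0, 0.0
stream = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 100  # samples from y = 2x
for x, y in stream:
    w, b = sgd_step(w, b, x, y)
print(round(w, 2), round(b, 2))  # w approaches 2.0, b approaches 0.0
```

The same principle scales up to periodically retraining richer models at the edge, with the cloud handling the heavier consolidation work.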
According to Ecosystm, global IoT spending is predicted to grow at a compound annual growth rate (CAGR) of 6.9% from 2017 to 2022, reaching a value of $367bn.
The findings, based on Ecosystm’s semi-annual IoT global forecast, suggest the Asia-Pacific region will become the global centre for IoT, growing at a CAGR of 7.4% to account for almost half (48%) of worldwide spend at $177bn by 2022.
Previously, edge devices were considered low-value elements of any IoT market forecast. However, Ecosystm noted that the rise of IoT edge requirements, coupled with AI and ML capabilities, is driving new, richer hardware configurations. As a result, the study suggests that hardware will grow at a CAGR of 9.2% to $115bn by 2022.
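The forecast figures above can be sanity-checked with a little compound-growth arithmetic: working a CAGR backwards gives the implied 2017 baselines, and the regional share follows directly from the 2022 numbers.

```python
# Back-of-the-envelope check of the Ecosystm forecasts quoted above:
# recover the implied 2017 baselines from the 2022 values and CAGRs.

def base_from_cagr(final_value: float, cagr: float, years: int = 5) -> float:
    """Implied starting value given a final value and a CAGR over `years`."""
    return final_value / (1 + cagr) ** years

total_2017 = base_from_cagr(367, 0.069)  # implied 2017 global IoT spend, $bn
apac_2017 = base_from_cagr(177, 0.074)   # implied 2017 Asia-Pacific spend, $bn
print(round(total_2017), round(apac_2017))
print(round(177 / 367 * 100))  # APAC share of 2022 spend: 48%
```

The 48% share matches the "almost half" figure in the forecast; the implied 2017 baselines (roughly $263bn globally) are derived here, not quoted by Ecosystm.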
Date: September 26, 2018