IBM Research Tech makes edge AI applications scalable


One of the most fascinating topics driving evolution in the world of technology is edge computing. After all, how could you not be excited by the promising concept of distributing data across multiple connected computing resources that work together toward a common goal?

Early real-world deployments, however, have proven that edge computing is more interesting in theory than in practice. Distributing computing tasks across multiple locations and then bringing those disparate efforts back into a coherent, meaningful whole is harder than it first appears. This is especially true when trying to scale small proof-of-concept (POC) projects into full-scale production.

Issues such as the need to move massive amounts of data from place to place – something that, ironically, edge computing was supposed to make unnecessary – as well as overwhelming compute demands are just two of the many factors that have conspired to make successful edge deployments the exception rather than the rule.

IBM's (NYSE:IBM) research team has been working with IBM Sustainability Software and IBM Consulting to help address some of these challenges. More recently, the group has begun to see success in industrial settings such as automobile manufacturing by taking a different approach to the problem. In particular, the company has been rethinking how data is analyzed at various edge locations and how AI models are shared with other sites.

In car manufacturing plants, for example, most companies have started using AI-powered visual inspection models that find manufacturing defects that are difficult or too expensive for humans to detect. Proper use of tools like the Visual Inspection solution in IBM's Maximo Application Suite, with its goal of Zero D (defects or downtime), can help automakers both save significant amounts of money by eliminating defects and keep production lines running as quickly as possible. Given the supply chain constraints that many auto companies have faced recently, that point has become especially critical.

The real trick, however, is getting to the zero-D aspect of the solution, because inconsistent results based on misinterpreted data can have the exact opposite effect, especially if that incorrect data ends up being propagated across multiple manufacturing sites in inaccurate AI models. To avoid costly and unnecessary production line shutdowns, it's important to ensure that only the right data is being used to create AI models, and that the models themselves are regularly validated to avoid the kinds of errors that mislabeled data can create.
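To make that validation step concrete, here is a minimal sketch of the kind of accuracy gate such a pipeline might apply before a model is promoted to other sites. The predict function, the holdout set, and the 95% threshold are all illustrative assumptions; the article does not describe IBM's actual validation pipeline.

```python
import numpy as np

# A minimal sketch of a validation gate, assuming a hypothetical
# predict(image) -> label function and a small trusted, human-verified
# holdout set; every name and threshold here is illustrative, not IBM's.

def validate_model(predict, images, labels, min_accuracy=0.95):
    """Return True only if the candidate model clears the accuracy bar
    on trusted holdout data; a failing model should not be propagated
    to other manufacturing sites."""
    preds = np.array([predict(img) for img in images])
    accuracy = float((preds == np.array(labels)).mean())
    return accuracy >= min_accuracy

# Usage with a dummy predictor on synthetic 32x32 "part" images:
holdout = [np.random.rand(32, 32) for _ in range(20)]
truth = [int(img.mean() > 0.5) for img in holdout]
print(validate_model(lambda img: int(img.mean() > 0.5), holdout, truth))
```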

This “refinement” of AI models is the secret sauce that IBM Research brings to manufacturers, and in particular to a major U.S. automotive OEM. IBM is working with something called out-of-distribution (OOD) detection, which identifies when the data being fed to a visual model falls outside the acceptable range and could therefore cause the model to produce inaccurate results. More importantly, it does this work automatically, eliminating the bottlenecks that time-consuming human labeling efforts can create and allowing the work to be scaled across multiple manufacturing sites.
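As an illustration of the general idea, the sketch below uses one common OOD baseline, maximum-softmax-probability thresholding, which flags inputs the model is insufficiently confident about and routes them to human review. The article does not say which OOD method IBM Research actually uses, and the 0.7 threshold is an arbitrary placeholder.

```python
import numpy as np

# A minimal OOD-detection sketch using the maximum-softmax-probability
# baseline; this is one well-known approach, not necessarily IBM's.

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def is_out_of_distribution(logits: np.ndarray, threshold: float = 0.7) -> bool:
    """Flag an input as OOD when the model's most confident class
    probability falls below a tuned threshold, so it gets human review
    instead of silently corrupting downstream labels and models."""
    return float(softmax(logits).max()) < threshold

# Usage: a confident in-distribution sample vs. an ambiguous one.
print(is_out_of_distribution(np.array([6.0, 0.5, 0.2])))  # False: confident
print(is_out_of_distribution(np.array([1.1, 1.0, 0.9])))  # True: ambiguous
```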

One result of OOD detection, called data summarization, is the ability to select which data gets manually inspected and labeled, and then used to update the model. In fact, IBM is working on reducing the amount of data traffic involved by 10x-100x. Additionally, this approach results in 10x better utilization of the man-hours spent on manual inspection and labeling by eliminating large amounts of redundant data (nearly identical images). By combining it with modern techniques like once-for-all (OFA) model architecture research, the company hopes to reduce the size of the models themselves by about 100x, enabling more efficient edge computing deployments. Combined with automation technologies designed to easily and accurately deploy these models and data sets, this enables companies to create AI-powered edge solutions that can successfully scale from small POCs to full production deployments.
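The following sketch shows the core of that summarization idea: greedily keeping only images that are not near-duplicates of ones already kept, so human labelers see a handful of representatives rather than thousands of nearly identical frames. The flattened-pixel embeddings and the 0.98 similarity cutoff are illustrative assumptions; a production system would use learned feature embeddings from the inspection model itself.

```python
import numpy as np

# A minimal data-summarization sketch: drop near-duplicate images and
# keep only representatives for manual inspection and labeling. All
# names and the 0.98 cutoff are illustrative assumptions.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def summarize(images: list[np.ndarray], sim_cutoff: float = 0.98) -> list[int]:
    """Greedily keep an image only if it is not nearly identical to one
    already kept; return the indices of the retained representatives."""
    kept: list[int] = []
    embeddings = [img.ravel() for img in images]  # flattened pixels as a stand-in
    for i, emb in enumerate(embeddings):
        if all(cosine_similarity(emb, embeddings[j]) < sim_cutoff for j in kept):
            kept.append(i)
    return kept

# Usage: two near-duplicate frames plus one distinct frame -> 2 kept.
base = np.random.rand(16, 16)
frames = [base, base + 0.001, np.random.rand(16, 16)]
print(summarize(frames))  # e.g., [0, 2]
```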

Efforts such as the one with the major U.S. automotive OEM are a critical step toward making edge computing viable in markets such as manufacturing. However, IBM sees the possibility of applying these AI model refinement concepts to many other industries as well, including telcos, retail, industrial automation and autonomous driving. The trick is to create solutions that work around the inevitable differences among edge computing sites and use the unique value that each individual site can produce on its own.

As edge computing evolves, it’s clear that it’s not just about collecting and analyzing as much data as possible, but about getting the right data and using it as wisely as possible.

Disclaimer: Some of the author’s clients are vendors in the tech industry.

Disclosure: None.


Editor’s Note: Bullets for this article’s summary were selected by Seeking Alpha editors.
