Algorithms, data and models are not an end in themselves: they should ultimately increase the efficiency of our own production or lead to new products and business models. However, we caution against rushing to develop new ML-based products before overall equipment effectiveness (OEE) in-house has been raised to at least 90 per cent. This short excerpt presents three applications.
The talking lathe
Voice control has found its way into private households thanks to tech companies from the USA. Most applications build on the research of Professor Sepp Hochreiter and Professor Jürgen Schmidhuber. Now, however, some developers from the consumer sector are moving into industry.
A prominent example is Omnibot from Oldenburg. Jeff Adams, a former leader of the team that developed Amazon Alexa’s speech technology, joined the conversational AI platform Omnibot in 2018 as co-founder and principal scientist. According to the press release at the time, he and the team at his research and development company, Cobalt Speech and Language, would contribute leading technological expertise and the speech technologies developed by Cobalt. As a result, the Oldenburg-based company confidently explained, Omnibot can offer a speech and conversational AI platform built entirely on in-house technology – the first company of its kind in Europe.
And industry is discovering Omnibot’s voice solution for itself – for example in maintenance. The user puts a question to the system, which answers which part is defective, walks the human through the maintenance process and supports him with image displays. The challenge: the system must also be able to understand dialects, which means that the bot (a computer program that works automatically) has to be trained with dialect data as well. “Our unique selling point is our knowledge. We have over 25 years of experience with language,” explains Jascha Stein, CEO of Omnibot, in the podcast interview. In addition, the north Germans want to make it as easy as possible for users to implement voice control. Even non-programmers can create a complex bot via a graphical interface, the developers assure. “And the industrial user can communicate bidirectionally with the machine and call up sensor data,” explains Jascha Stein. The “talking lathe” called for by Sepp Hochreiter is becoming reality. In most cases, however, industrial companies that use the technology want the voice platform to be separate from the internet. Voice assistant ecosystems that process data in the local data center are therefore gaining importance in industry. According to Omnibot, this is exactly what it can deliver. “Trustworthy AI is our product,” summarizes Jascha Stein: secure voice setups for industry – abroad as well.
The team at Workheld also defines the maintenance of machines and plants as a target market. The Austrians have developed a voice assistant for plant and machine maintenance. “The maintenance technician talks to the machine,” explains Benjamin Schwärzler, who studied production management in Vienna, in the podcast discussion at the Hanover Fair. The intelligence of the Workheld solution sits in an inconspicuous tablet. The technician walks up to the plant, the plant recognizes the tablet and the conversation starts. “It could sound like this, for example: ‘In system no. 5 there are problems with the spindle on the Y-axis.’ The system then searches for past faults – and perhaps answers: ‘Two years ago there was the same problem’, suggests solutions and also says who fixed the problem at the time. That way you can immediately turn to the right colleague, who is already familiar with the problem.”
Or: the machine reports current problems with the pump, and the software immediately offers the technician exploded-view drawings or searches the database for the experiences of other colleagues. The repair orders flow into an IoT system, the basis for the talking machine. “We are not only problem solvers, but also interactive knowledge management,” emphasizes Benjamin Schwärzler, who founded the company four years ago. The system also saves its communication with the technician. “The system remembers customer and project names, assigns information and constantly expands its understanding of the language,” adds Benjamin Schwärzler. The idea for the “talking machine” came to him and his team from their first product: a classic maintenance tablet with blueprints and a knowledge database. “We then observed our users closely and quickly realized that the technicians on site were reluctant to write test reports or documentation,” Schwärzler recalls. Expenses were also rarely entered correctly. “There had to be an easier way.” Speech-to-text was the solution – and at the same time a difficult task. Today, the user can dictate test reports to the system and report anomalies directly by voice. Each spoken piece of documentation enriches the solution in terms of content, and other employees or new colleagues benefit from it.
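Workheld’s own speech pipeline is not public, but the dictation step described above can be illustrated with an open-source speech-to-text model. The following is a minimal sketch, assuming the open-source openai-whisper package (plus ffmpeg on the system) and a local recording named report.wav; both are illustrative choices, not part of Workheld’s product.

```python
# Minimal speech-to-text sketch for dictated test reports.
# Assumptions: "openai-whisper" is installed (pip install openai-whisper),
# ffmpeg is available on the system, and "report.wav" is a local recording.
# This only illustrates the dictation step; it is not Workheld's implementation.
import whisper


def transcribe_report(audio_path: str) -> str:
    """Transcribe a dictated maintenance report to plain text."""
    model = whisper.load_model("base")      # small pretrained model
    result = model.transcribe(audio_path)   # returns a dict with a "text" field
    return result["text"].strip()


if __name__ == "__main__":
    text = transcribe_report("report.wav")  # hypothetical file name
    print(text)  # e.g. store the text alongside the work order in the IoT system
```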
On the one hand, the young entrepreneur’s technology builds on well-known voice assistants such as Alexa, Siri and co. But the greatest challenge lies in developing a framework for intent recognition. In plain language: the machine, app, tablet or bot must understand what the user, technician or maintenance worker wants, must recognize the speech and convert it into text and, if necessary, react to it. “We develop the frameworks for the machines together with our customers on site and use different NLP (natural language processing) technologies for this,” explains the Vorarlberg native. NLP refers to ML-based technologies that make it possible to build natural language understanding features into apps, bots and IoT devices.
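What an intent-recognition step looks like in code depends entirely on the framework chosen. The sketch below is only a minimal illustration, assuming scikit-learn and a handful of hand-written example utterances; the intent names and phrases are invented for this example and do not come from Workheld.

```python
# Minimal intent-recognition sketch: map a transcribed utterance to an intent.
# Assumptions: scikit-learn is installed; intents and training phrases are
# invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled training set of (utterance, intent) pairs.
training_data = [
    ("report a fault on the spindle", "report_fault"),
    ("the pump is making strange noises", "report_fault"),
    ("show me the sensor data for the y axis", "query_sensor"),
    ("what is the temperature of the motor", "query_sensor"),
    ("who fixed this problem last time", "query_history"),
    ("were there similar faults before", "query_history"),
]
texts, intents = zip(*training_data)

# Bag-of-words features plus a linear classifier; production systems use far
# richer NLP models, but the structure is the same: text in, intent out.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, intents)

print(model.predict(["problems with the spindle in the y axis"])[0])  # -> report_fault
```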
The breakthrough came with understanding the spoken language. Workheld costs 39 euros per user per month – with an SAP connection, if desired. A German carmaker is already using the technology from the Viennese startup. The main competitors are augmented reality providers. The advantages of a voice solution: “We don’t need a helmet, glasses or large batteries, our solution doesn’t tire the eyes, and the hands remain free for the work,” summarizes the founder. And the noise on the factory floor – do technicians and machines understand each other there? “In harsh environments we also work with headsets. We have had good experiences with this,” reports Benjamin Schwärzler.
AI in 3D printing
In recent years, few industrial users have been able to ignore 3D printing processes. A hype arose similar to the AI hype. It has since flattened out somewhat, notes Peter Leibinger. The Trumpf deputy CEO is nevertheless sticking to his sales target: he expects 500 million euros in revenue in five to seven years. 3D printing will “prevail, but not so disruptively that there will be no other processes,” he explained in an interview at the Formnext trade fair.
In a commentary, Robert Weber wrote: additive manufacturing, or 3D printing, means developing new products, testing new business models, getting to know new materials and their properties, operating new hardware, relearning design, automating the process chain and, on top of that, connecting and digitizing customers. It can hardly get more complex. And for this, in addition to robots, above all people are needed, because only through them can the digital twin be created from the design. Charles Hull, the co-inventor of the technology, is therefore right when he calls for “ingenuity and foresight, passion and perseverance”, because all of this is tiring for people and for the industrial process.
Automating the printing process is still a challenge for many companies, but some are further along and already use AI methods. One example is Protiq from Blomberg, a spin-off from Phoenix Contact. Why does the company use deep learning methods?
Protiq produces a lot of custom parts in its printers. The Blomberg company uses the selective laser sintering (SLS) process for this. The advantage of this technology: users can produce not just one component per build space, but any number of different components. Since these are nested three-dimensionally in space, the build space can be used more efficiently. From the customer interface on the web to the printer, practically everything at Protiq is automated. An employee only has to clear the build space, i.e. take out the finished parts, rework them and send them to dispatch. In the past, assigning these parts to the respective customer order involved a great deal of effort. At this point, an algorithm has solved the problem. Together with the University of Paderborn, the engineers developed a new type of technology that automates this component recognition using deep learning methods. Tobias Nickchen shared responsibility for the project on the research side. “Our system has to recognize new components every day,” he underlines the challenge in the podcast interview. Deep learning systems are able to learn numerous non-linear problems independently from existing training data. Manual feature engineering is therefore no longer necessary. Instead, on the basis of the training data, the system independently acquires so-called deep features. In the case of sorting, these are internalized in such a way that the individual objects can be distinguished very well by their features.
Component recognition in classic series production usually works with products or components that have been defined manually in advance. Because the AI at Protiq has to adapt to new components on its own in every production run, the system is constantly learning. The data basis consists of 3D images rendered from the CAD data. “The system is trained with this,” says Tobias Nickchen.
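Protiq’s actual pipeline is not published in detail. As a rough sketch of the idea – training an image classifier on views rendered from the customers’ CAD files – the following assumes PyTorch/torchvision and a folder of rendered images per part; the directory layout and hyperparameters are illustrative, not Protiq’s implementation.

```python
# Sketch: learn to recognize printed parts from images rendered out of CAD data.
# Assumptions: PyTorch + torchvision are installed and "renders/" contains one
# sub-folder of rendered views per part in the current build job (illustrative
# layout only).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# One class per ordered part; the renders act as synthetic training data.
train_set = datasets.ImageFolder("renders/", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                      # short fine-tuning run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# At sorting time, a camera image of a real part is passed through the same
# model, and the predicted class indicates which order the part belongs to.
```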
In production, the AI uses a camera to compare images of the real components with the orders and thus recognizes where each part belongs. The corresponding components can then be marked visually on the scanning area for each order. The advantage: the system sorts quickly, minimizes manual effort and reduces errors – a benefit in both the digital and the physical process chain.
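Because the set of parts changes with every build job, one common way to realize this comparison step (not necessarily Protiq’s) is to match deep-feature embeddings of the camera image against embeddings of the rendered views. The sketch below only illustrates that matching with plain NumPy cosine similarity; the feature vectors are assumed to come from a network like the one above, and random vectors stand in for them here.

```python
# Sketch: assign a photographed part to an order by comparing deep features.
# Assumption: the feature vectors come from some extractor (e.g. the penultimate
# layer of the classifier above); only the cosine-similarity match is shown.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_part(camera_feature: np.ndarray,
               order_features: dict[str, np.ndarray]) -> str:
    """Return the order ID whose rendered views are most similar to the photo."""
    return max(order_features,
               key=lambda oid: cosine_similarity(camera_feature, order_features[oid]))


# Dummy example with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
orders = {f"order_{i}": rng.normal(size=128) for i in range(3)}
photo = orders["order_1"] + 0.05 * rng.normal(size=128)   # noisy view of order_1
print(match_part(photo, orders))                          # -> "order_1"
```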
AI as an assistance system
You know the challenge: if you want to assemble a new cabinet yourself, you open the packages from the furniture store and rely on the workpieces – the shelves, the back panel, the cabinet doors – being the right size for assembly, that is, correctly cut, sorted and packed. So that the cabinet ends up pleasing the buyer, ML methods support the furniture manufacturers.
Benedikt Buer is one of the developers of Homag’s Intelliguide assistance system. Homag develops and manufactures woodworking machines, and Benedikt Buer works in the area of panel dividing technology. Homag uses Intelliguide on the saw. The process: the employee feeds a wooden panel into the saw. The machine saws the panel according to the specifications of the cutting plan, and the finished workpieces have to be sorted correctly after production. Errors can occur both when feeding the saw and when sorting. That is why the developers built a camera-based assistance system. Using a laser projector, it informs operators if they have inserted the panel incorrectly, positioned the workpieces incorrectly or want to start the work process in the wrong place.
The camera detects the workpieces, and the machine immediately analyzes whether the panel has been inserted or sorted correctly. The developers speak of embedded intelligence. The algorithm has learned from examples what a panel looks like. It can then determine how many panels are in the picture, where they are and how big they are. Homag trained the algorithm with large amounts of sample data; according to Homag, it learns the relationships itself. The developers positioned workpieces in the handling area and gave the algorithm the information: “Here you can see the workpieces, in these colors, at these positions and in these sizes.” The training was done in the cloud because it generated large amounts of data. Homag used a neural network. “It’s a special kind of algorithm,” explains Benedikt Buer. The neural network takes an image and assigns each pixel to one of several groups – in other words, it acts as a classifier. Every pixel of the image is evaluated as to whether or not it belongs to a workpiece. There are many examples of such classifiers. “Anyone who is interested in this should look into segmentation models. Google Scholar, for example, provides a first starting point,” reports Benedikt Buer. “But you still have to put your own brainpower into it. It is a good starting point, though.”
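The passage above describes pixel-wise classification: every pixel is labelled “workpiece” or “background”. The following is a minimal, self-contained sketch of that idea, not Homag’s Intelliguide implementation: it trains a tiny fully convolutional network on randomly generated dummy images in PyTorch, whereas a real system would use a proper segmentation architecture and labelled camera images.

```python
# Sketch of pixel-wise classification: label every pixel as workpiece (1) or
# background (0). Dummy data stands in for real labelled camera images.
import torch
from torch import nn

# A tiny fully convolutional network: image in, one logit per pixel out.
segmenter = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),
)

# Dummy training batch: bright rectangles on a dark background act as "panels".
images = torch.rand(8, 3, 64, 64) * 0.2
masks = torch.zeros(8, 1, 64, 64)
for i in range(8):
    x, y = torch.randint(0, 32, (2,))
    images[i, :, y:y + 30, x:x + 30] += 0.7     # the "workpiece" region
    masks[i, :, y:y + 30, x:x + 30] = 1.0

optimizer = torch.optim.Adam(segmenter.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()                # binary label per pixel

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(segmenter(images), masks)
    loss.backward()
    optimizer.step()

# Per-pixel decision: probability > 0.5 means "belongs to a workpiece".
pred = torch.sigmoid(segmenter(images)) > 0.5
print("pixel accuracy:", (pred == masks.bool()).float().mean().item())
```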
What is the advantage of the system? Overall equipment effectiveness increases, fewer rejects are produced, fewer mistakes are made and Homag customers can train employees more quickly. Fewer than ten Homag employees were involved in the project. They came from the fields of mechanics, electronics, software and ML. “A project like this is demanding, and if a company has no experience with ML projects, it should bring in an external expert to help with the first steps,” advises Benedikt Buer.
You can listen to more about AI in Robert Weber’s AI podcast.