TwinCAT Machine Learning offers further inference engine

With the TwinCAT Machine Learning Server as an additional inference engine, TwinCAT Machine Learning keeps pace with the growing demands that machine learning (ML) and deep learning place on industrial applications: ML models are becoming ever more complex, execution speeds are expected to rise, and inference engines must support an ever wider range of ML models.

The TwinCAT Machine Learning Server is a standard TwinCAT PLC library and a so-called near-real-time inference engine: unlike the two previous engines, it does not run in hard real time but in a separate process on the IPC. In return, virtually any AI model can be executed in the server engine, with full support for the standardized exchange format Open Neural Network Exchange (ONNX). In addition, AI-optimized hardware options are available for this TwinCAT product, enabling scalable performance.
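
To illustrate what the ONNX exchange format means in practice, a model trained in a common framework can typically be exported to an ONNX file, which an ONNX-capable engine can then load. The following minimal sketch assumes PyTorch and its standard torch.onnx.export call; the toy model and file name are hypothetical and are not part of the TwinCAT product itself:

```python
# Minimal sketch: exporting a trained model to ONNX (assumes PyTorch is installed;
# the model and file name are illustrative only, not TwinCAT-specific).
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Toy stand-in for a trained defect-classification model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy_input = torch.randn(1, 16)  # example input that fixes the tensor shape

# Export to the ONNX interchange format consumed by ONNX-capable inference engines.
torch.onnx.export(
    model,
    dummy_input,
    "classifier.onnx",
    input_names=["features"],
    output_names=["logits"],
)
```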

The TwinCAT Machine Learning Server can run with classic parallelization on CPU cores, use the integrated GPU of Beckhoff Industrial PCs, or access dedicated GPUs, e.g., from NVIDIA. The result is an inference engine with maximum flexibility in terms of models and high performance in terms of hardware.
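
The general idea of choosing between CPU and GPU execution for an ONNX model can be illustrated with ONNX Runtime, where execution providers select the hardware backend. This is a generic sketch under that assumption, not the TwinCAT Machine Learning Server's own configuration interface; the file name and input shape match the export sketch above:

```python
# Generic sketch of CPU vs. GPU backend selection with ONNX Runtime;
# not the TwinCAT Machine Learning Server API.
import numpy as np
import onnxruntime as ort

# Prefer a CUDA-capable GPU if available, otherwise fall back to the CPU.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("classifier.onnx", providers=providers)

features = np.random.rand(1, 16).astype(np.float32)
logits = session.run(["logits"], {"features": features})[0]

print("active providers:", session.get_providers())
print("logits:", logits)
```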

Applications range from predictive and prescriptive models to machine vision and robotics. Examples include image-based methods for sorting or evaluating products, for defect classification, for defect or product localization, and for calculating gripping positions.
