ATAWALPA: artificial intelligence for small industrial companies

Fede Amigone
5 min read · Feb 5, 2021

ATAWALPA is experimental software designed for small industrial companies to test new technological capabilities and explore paths toward Industry 4.0.

Industrial companies typically run production processes that involve physical quantities such as pressures, temperatures, levels, etc. These physical processes always unfold in time, i.e. they produce time series. As a data structure, time series have very particular mathematical properties and pose challenges of their own. The most demanding of these is prediction.

Can a small industrial organization, with no large technology infrastructure and lagging far behind the bewildering pace of technological transformation, forecast the future of its processes? Yes, and that is the reason for this post.

ATAWALPA is a technology stack that “listens” to sensors, stores their behavior and generates predictive models from it. It is designed as an extensible platform for exploring opportunities to approach Industry 4.0, and it offers three basic functionalities: management of tags associated with physical sensors, telemetry for each tag and, finally, prediction of a tag’s future value N time steps ahead.

For example, an industrial company could monitor the temperature of a cold room. The chamber would have a temperature sensor that publishes messages over the MQTT protocol under a tag, say “temperature01”, indicating that the event corresponds to cold chamber #1. In ATAWALPA you would only have to create the tag “temperature01”, and the temperatures would automatically be received and displayed in the telemetry monitor.
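As a rough sketch of the sensor side, a Node.js publisher using the npm mqtt client could look like this; the broker address, the sampling interval and the readSensor stub are assumptions for illustration:

```javascript
const mqtt = require('mqtt');

// Placeholder for real hardware access: returns plausible cold-room readings.
const readSensor = () => 4 + Math.random();

const client = mqtt.connect('mqtt://localhost:1883'); // assumed broker address

client.on('connect', () => {
  setInterval(() => {
    // Publish each reading under the tag for cold chamber #1
    client.publish('temperature01', String(readSensor()));
  }, 5000); // assumed 5-second sampling interval
});
```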

Normally, sensors produce a large volume of data in a short time. ATAWALPA takes these events and organizes them as a supervised machine learning problem. After some time listening to a sensor, ATAWALPA therefore already has the information needed to train a predictive model, with a margin of error whose acceptable magnitude depends on the type of problem.
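Framing the series as a supervised problem usually means sliding a window over it: each run of consecutive readings becomes an input, and the reading that follows becomes its target. A minimal sketch of that idea (the function name and window shape are assumptions, not ATAWALPA’s actual code):

```javascript
// Turn a plain series into (window, next value) training pairs.
function toSupervised(series, lookback) {
  const inputs = [];
  const targets = [];
  for (let i = 0; i + lookback < series.length; i++) {
    inputs.push(series.slice(i, i + lookback)); // e.g. the last 10 readings
    targets.push(series[i + lookback]);         // the value to predict
  }
  return { inputs, targets };
}

// toSupervised([21.0, 21.2, 21.1, 21.4, 21.3], 3)
// -> inputs: [[21.0, 21.2, 21.1], [21.2, 21.1, 21.4]], targets: [21.4, 21.3]
```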

Training runs in a very simple way, with sensible default parameters. However, its performance can be tuned to increase predictive capacity, depending on the computational power available at the edge: tuning amounts to changing a few parameters and re-training and validating the model with a couple of clicks.

Once the model has been trained and validated with an acceptable margin of error, a prediction can be generated N time steps into the future (N is also a model parameter).
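One common way to reach N steps ahead with a one-step model is to feed each prediction back in as input; the sketch below assumes that recursive strategy, which is not necessarily how ATAWALPA implements its N-step output:

```javascript
const tf = require('@tensorflow/tfjs');

// Recursively predict n future values from the most recent window.
async function predictNSteps(model, lastWindow, n) {
  const window = lastWindow.slice(); // the latest `lookback` readings
  const predictions = [];
  for (let step = 0; step < n; step++) {
    const input = tf.tensor3d([window.map(v => [v])]); // shape [1, lookback, 1]
    const next = (await model.predict(input).data())[0];
    predictions.push(next);
    window.shift();    // drop the oldest reading
    window.push(next); // treat the prediction as the newest observation
  }
  return predictions;
}
```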

The underlying architecture of the proposal can be seen in Fig. 01.

Fig. 01: ATAWALPA architecture

ATAWALPA is built on Meteor.js, so it proposes a Node.js-based stack. It connects to a Mosquitto broker and stores in MongoDB every event whose tag has been created. Deep learning is done through the TensorFlow.js API, with plotting in Plotly and React for the user view.
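The listening side of that stack can be pictured with a short sketch: subscribe to Mosquitto and persist each event in MongoDB. The database and collection names here are assumptions for illustration:

```javascript
const mqtt = require('mqtt');
const { MongoClient } = require('mongodb');

async function main() {
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const events = mongo.db('atawalpa').collection('events'); // assumed names

  const client = mqtt.connect('mqtt://localhost:1883');
  client.on('connect', () => client.subscribe('temperature01'));
  client.on('message', async (topic, payload) => {
    // Store every reading whose tag exists in the tag manager
    await events.insertOne({
      tag: topic,
      value: parseFloat(payload.toString()),
      ts: new Date(),
    });
  });
}

main();
```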

Some ATAWALPA views for the described functionalities can be seen below:

Fig. 02: Tag Manager

As shown in Fig. 02, ATAWALPA offers a tag manager in which the different sensors can be registered. For example, four temperature sensors could be registered for cold chamber #1, another four for cold chamber #2, a level sensor for industrial tank #1, and so on.

Clicking the sensor’s left button opens its telemetry, as shown in Fig. 03.

Fig. 03: Sensor telemetry

Once a certain volume of data has been collected automatically, e.g. a thousand sensor readings, pressing a button makes ATAWALPA convert it into a supervised learning problem and use it to train a predictive model built from parameterized stacked LSTM cells. Training yields a graph that should take the form of a downward curve, since it plots the loss, i.e. the error the system makes when predicting, as it learns. As the learning process progresses (a process that can take from seconds to hours, depending on several factors), the prediction error should therefore shrink.

Fig. 04: Plot of the error generated by the model as the learning process progresses.
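A minimal stacked-LSTM regressor in TensorFlow.js, in the spirit of the description above, could be built as follows; layer sizes and the optimizer are illustrative defaults, not ATAWALPA’s actual configuration:

```javascript
const tf = require('@tensorflow/tfjs');

function buildModel(lookback) {
  const model = tf.sequential();
  // The first LSTM layer returns the full sequence so a second can stack on it
  model.add(tf.layers.lstm({
    units: 32,
    returnSequences: true,
    inputShape: [lookback, 1], // one reading per time step
  }));
  model.add(tf.layers.lstm({ units: 32 }));
  model.add(tf.layers.dense({ units: 1 })); // the predicted next value
  model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });
  return model;
}

// model.fit(xs, ys, { epochs: 50 }) reports the loss after each epoch;
// that loss is what the downward curve in Fig. 04 plots.
```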

Once a curve like the one above has been obtained, the predictive model can be validated by comparing its predictions against two data sets: the data used to train the model, and a disjoint set reserved to measure how the model behaves on data it has never seen. This validation is done by pressing a button, which produces a graph with three curves: the actual readings from the sensor, the predictions made on the same data the model was trained on, and the predictions made on data the model never had available.

Fig. 05: Validation of the predictive model.
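Under the hood, that comparison might look like the sketch below; the 80/20 split ratio is an assumption, and inputs/targets are the windowed pairs from the earlier sketch:

```javascript
const tf = require('@tensorflow/tfjs');

function validate(model, inputs, targets) {
  const splitAt = Math.floor(inputs.length * 0.8); // assumed split ratio
  const asTensor = (ws) => tf.tensor3d(ws.map(w => w.map(v => [v])));

  // Predictions on the same data the model was trained on...
  const trainPred = model.predict(asTensor(inputs.slice(0, splitAt)));
  // ...and on the disjoint set it never saw during training.
  const testPred = model.predict(asTensor(inputs.slice(splitAt)));

  // Plotly then draws three traces: actual readings, trainPred, testPred.
  return { actual: targets, trainPred, testPred };
}
```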

Once the model has been trained and validated, a final button can be pressed: prediction.

Fig. 06: Prediction at N time steps in the future.

This whole process can be tuned depending on how much data and, fundamentally, how much hardware capacity is available at the edge. The parameters are easily adjusted through a module that lets you modify their default values.

Fig. 07: Hyperparameters of the neural architecture.
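The knobs such a module exposes might look like the following object; the names and values here are illustrative assumptions, not ATAWALPA’s actual defaults:

```javascript
const hyperparams = {
  lookback: 10,   // readings per input window
  lstmUnits: 32,  // cells per LSTM layer
  lstmLayers: 2,  // depth of the stacked LSTM
  epochs: 50,     // training passes over the data
  batchSize: 32,  // samples per gradient update
  horizon: 5,     // N: time steps ahead to predict
};
```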

On top of this predictive platform, business logic can be extended to add value to the production process. Although the path is not a simple one, small industrial companies must begin taking their first steps toward technological transformation in the age of Industry 4.0. Perhaps ATAWALPA can be the gateway for some of them.

My LinkedIn profile.

Atawalpa on GitHub.
