Introduction
JavaScript is becoming increasingly attractive to people developing machine learning applications. The language allows the development of client-side neural networks, thanks to TensorFlow.js and Node.js.
Client-side development lets you use local data without the hassle of transferring it over the internet, and the application needs only a web browser to run. No additional installations or prerequisites are required to use the application.
In this article, you will learn how to get started with TensorFlow.js by developing a language translator.
Glossary
Key terms:
Client-side: The code runs on the client machine, i.e. in the browser.
Yarn: Yarn is a package manager that replaces the existing workflow for the npm client while remaining compatible with the npm registry, which makes many tasks easier.
TensorFlow.js: TensorFlow.js is a library for machine learning in JavaScript. It lets you develop ML models in JavaScript and use ML directly in the browser or in Node.js.
Requirements
Python
Yarn
npm
Node.js
TensorFlow.js
Keras (optional)
TensorFlow (optional)
The entire code for the project is available in the GitHub repository.
Step-by-step
Let’s work through the steps, learning as we go:
Step 1: Initialization
Let’s begin by installing the necessary libraries. The installation process for the Linux platform is covered below. I have a Linux subsystem installed on my Windows 10 machine, which makes things much easier to work with.
- First, update the existing system: sudo apt-get update
- We are going to use Node.js. The recommended way to install it is:
- curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
- sudo bash nodesource_setup.sh
- sudo apt install nodejs
- We install Yarn using the command npm install -g yarn.
- TensorFlow.js is the library we will use to run the model that translates the input given to our application. Its companion Python package, used to convert Keras models for the browser, is installed with pip3 install tensorflowjs.
Step 2: The Translator Model
Let’s code the main Javascript file.
We need to import TensorFlow.js, along with the loader and UI modules that implement the desired behavior of our web application.
import * as tf from '@tensorflow/tfjs';
import * as loader from './loader';
import * as ui from './ui';
Let us now create a way to fetch the translator model. In this tutorial, we use a pre-trained model; however, you can build another translator by applying transfer learning to the same pre-trained model.
const HOSTED_URLS = {
model:
'https://storage.googleapis.com/tfjs-models/tfjs/translation_en_fr_v1/model.json',
metadata:
'https://storage.googleapis.com/tfjs-models/tfjs/translation_en_fr_v1/metadata.json'
};
const LOCAL_URLS = {
model: 'http://localhost:1235/resources/model.json',
metadata: 'http://localhost:1235/resources/metadata.json'
};
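Which set of URLs to use depends on whether you serve the converted model files yourself or load the hosted copy. A minimal sketch of switching between them (the pickUrls helper and useLocal flag are illustrative additions, not part of the original code):

```javascript
// URL sets, repeated here so the sketch is self-contained.
const HOSTED_URLS = {
  model:
    'https://storage.googleapis.com/tfjs-models/tfjs/translation_en_fr_v1/model.json',
  metadata:
    'https://storage.googleapis.com/tfjs-models/tfjs/translation_en_fr_v1/metadata.json'
};
const LOCAL_URLS = {
  model: 'http://localhost:1235/resources/model.json',
  metadata: 'http://localhost:1235/resources/metadata.json'
};

// Hypothetical helper: pick local files when serving the model yourself.
function pickUrls(useLocal) {
  return useLocal ? LOCAL_URLS : HOSTED_URLS;
}

console.log(pickUrls(false).model);
```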
Our translator model has two parts:
- An encoder that takes in the English text and encodes it into a representation that can subsequently be fed to the decoder network.
- A decoder that takes this representation and maps it to a French translation; together, the two components learn to work as a complete translator.
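To make the encoder's input concrete: each character of the input phrase is one-hot encoded against the input token index loaded from the metadata. A minimal plain-JavaScript sketch (the toy inputTokenIndex here is illustrative; the real one comes from metadata.json):

```javascript
// Toy input token index, standing in for metadata['input_token_index'].
const inputTokenIndex = {'h': 0, 'i': 1, '!': 2};
const numEncoderTokens = Object.keys(inputTokenIndex).length;

// One-hot encode a string into a [sequenceLength, numEncoderTokens] array,
// the shape the encoder expects for a single example.
function encodeString(str) {
  return Array.from(str).map(ch => {
    const row = new Array(numEncoderTokens).fill(0);
    row[inputTokenIndex[ch]] = 1;
    return row;
  });
}

console.log(encodeString('hi!'));  // each row has a single 1 at the character's index
```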

Let us initialize our Translator class by loading the model and the metadata needed to prepare the encoder and decoder parts of our translator.
class Translator {
/**
* Initializes the Translation model.
*/
async init(urls) {
this.urls = urls;
const model = await loader.loadHostedPretrainedModel(urls.model);
await this.loadMetadata();
this.prepareEncoderModel(model);
this.prepareDecoderModel(model);
return this;
}
We create the metadata-loading function as shown below.
async loadMetadata() {
const translationMetadata = await loader.loadHostedMetadata(this.urls.metadata);
this.maxDecoderSeqLength = translationMetadata['max_decoder_seq_length'];
this.maxEncoderSeqLength = translationMetadata['max_encoder_seq_length'];
console.log('maxDecoderSeqLength = ' + this.maxDecoderSeqLength);
console.log('maxEncoderSeqLength = ' + this.maxEncoderSeqLength);
this.inputTokenIndex = translationMetadata['input_token_index'];
this.targetTokenIndex = translationMetadata['target_token_index'];
this.reverseTargetCharIndex =
Object.keys(this.targetTokenIndex)
.reduce(
(obj, key) => (obj[this.targetTokenIndex[key]] = key, obj), {});
}
The metadata provides important information such as the maximum sequence length for both the encoder and the decoder. The token indices map each character to its integer index in the vocabulary, and the reverse index lets us map predicted indices back to characters.
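The reduce call in loadMetadata simply inverts the target token index so that predicted integer indices can be mapped back to characters. The same pattern in isolation, with a toy index standing in for the real metadata:

```javascript
// Toy target token index, standing in for metadata['target_token_index'].
const targetTokenIndex = {'a': 0, 'b': 1, 'c': 2};

// Invert it: numeric index -> character, as done in loadMetadata().
const reverseTargetCharIndex =
    Object.keys(targetTokenIndex)
        .reduce((obj, key) => (obj[targetTokenIndex[key]] = key, obj), {});

console.log(reverseTargetCharIndex);  // maps 0 -> 'a', 1 -> 'b', 2 -> 'c'
```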
Step 3: Encoder and Decoder
With the basic ingredients ready, let’s proceed by segmenting out the model information obtained in the previous step to build our prepareEncoderModel function.
prepareEncoderModel(model) {
this.numEncoderTokens = model.input[0].shape[2];
console.log('numEncoderTokens = ' + this.numEncoderTokens);
const encoderInputs = model.input[0];
const stateH = model.layers[2].output[1];
const stateC = model.layers[2].output[2];
const encoderStates = [stateH, stateC];
this.encoderModel =
tf.model({inputs: encoderInputs, outputs: encoderStates});
}
The number of encoder tokens is the size of the input character vocabulary, i.e. the length of the one-hot vector that represents each input character. The hidden state (stateH) acts as the intermediary between the encoder and the decoder. The cell state (stateC) is the part of the LSTM network that helps us tackle the vanishing gradient problem and learn long-term sequences. These states can be retrieved from the loaded model, and we can build the encoder using the tf.model function of TensorFlow.js.
We perform similar operations to build our decoder as shown below.
prepareDecoderModel(model) {
this.numDecoderTokens = model.input[1].shape[2];
console.log('numDecoderTokens = ' + this.numDecoderTokens);
const stateH = model.layers[2].output[1];
const latentDim = stateH.shape[stateH.shape.length - 1];
console.log('latentDim = ' + latentDim);
const decoderStateInputH =
tf.input({shape: [latentDim], name: 'decoder_state_input_h'});
const decoderStateInputC =
tf.input({shape: [latentDim], name: 'decoder_state_input_c'});
const decoderStateInputs = [decoderStateInputH, decoderStateInputC];
// LSTM retrieval
const decoderLSTM = model.layers[3];
const decoderInputs = decoderLSTM.input[0];
const applyOutputs =
decoderLSTM.apply(decoderInputs, {initialState: decoderStateInputs});
let decoderOutputs = applyOutputs[0];
const decoderStateH = applyOutputs[1];
const decoderStateC = applyOutputs[2];
const decoderStates = [decoderStateH, decoderStateC];
// fully connected layer
const decoderDense = model.layers[4];
decoderOutputs = decoderDense.apply(decoderOutputs);
this.decoderModel = tf.model({
inputs: [decoderInputs].concat(decoderStateInputs),
outputs: [decoderOutputs].concat(decoderStates)
});
}
The latent dimension is the size of the LSTM hidden state, read from the last axis of stateH's shape; it defines the shape of the decoder state inputs. Finally, the model ends with a fully connected layer giving the probability of each candidate character.
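At inference time the dense layer emits, at each step, a probability distribution over the target characters, and decoding greedily picks the most probable one. A minimal sketch of that sampling loop in plain JavaScript (the probability vectors and the toy reverse index are illustrative, not the real model output):

```javascript
// Toy reverse index: predicted index -> target character.
const reverseTargetCharIndex = {0: 'b', 1: 'o', 2: 'n', 3: '\n'};

// Greedy sampling: take the argmax of the dense layer's softmax output.
function argmax(probs) {
  let best = 0;
  for (let i = 1; i < probs.length; i++) {
    if (probs[i] > probs[best]) best = i;
  }
  return best;
}

// Illustrative per-step probability vectors, as the decoder might emit them.
const steps = [
  [0.7, 0.1, 0.1, 0.1],
  [0.1, 0.6, 0.2, 0.1],
  [0.1, 0.2, 0.6, 0.1],
  [0.1, 0.1, 0.1, 0.7]
];

let decoded = '';
for (const p of steps) {
  const ch = reverseTargetCharIndex[argmax(p)];
  if (ch === '\n') break;  // stop character ends the sequence
  decoded += ch;
}
console.log(decoded);  // "bon"
```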
Step 4: Web Styling
The HTML and CSS files can be designed as desired. You can see the end result of the finished application in the GitHub repository.
The other supporting files can be found on GitHub and can be developed with the help of the TensorFlow.js example guide.
Learning tools
Learning materials for JavaScript and TensorFlow.js have been hyperlinked above; in addition, you can refer to the video linked here to understand the concepts behind TensorFlow.js.
Learning Strategy
The first step in developing such applications is a sound knowledge of JavaScript, together with basic knowledge of TensorFlow.js and machine learning concepts. These can be acquired from the material mentioned above. On a personal level, I had difficulty setting up Yarn because I kept running into errors: the installed version did not support the other needed dependencies. After some searching, I resolved the problem with the installation commands mentioned above.
Reflective Analysis
The project has improved my experience in the field of natural language processing. Reading only theoretical material in this domain tends to frustrate people into quitting before they realize its true potential. Learning TensorFlow.js and Node.js adds a new dimension to one's skill set and makes working on web artificial intelligence much simpler. I aimed to produce this application within a week's time; I was able to achieve the task comfortably in that period, and so can you. You can check out the project on a QR ticket scanning application here for a better understanding of the potential of Node.js.
Conclusion
You should definitely consider this approach when building online artificial intelligence applications, and you should also experiment with the available online pre-trained models. You can also train one from scratch and deploy it using JavaScript technologies. Visit the GitHub repository for this example and take inspiration to create something of real value.
Comments
Nice article! What is the difference between this autoencoder model and the famous variational autoencoder model? Also, I googled and found this interesting site on machine language translation: https://paperswithcode.com/task/machine-translation
Does TensorFlow.js use the same 2014 French-to-English dataset?
Hi! Thanks for reading and commenting. To answer your question: an autoencoder learns a compressed representation of the input. First there is compression via an encoder, followed by decompression via a decoder. A variational autoencoder (VAE) instead learns the parameters of a probability distribution representing the data, giving us a measure of uncertainty in addition to the above.
The dataset used for the purpose is available at http://www.manythings.org/anki.