NITS-VC System for VATEX Video Captioning Challenge 2020
Video captioning is the process of summarising the content, events, and actions of
a video into a short textual description, which is helpful in many research areas
such as video-guided machine translation, video sentiment analysis, and
providing aid to needy individuals. In this paper, a system description of the
framework used for the VATEX-2020 video captioning challenge is presented. We
employ an encoder-decoder based approach in which the visual features of the
video are encoded using a 3D convolutional neural network (C3D). In the
decoding phase, two Long Short-Term Memory (LSTM) recurrent networks are used:
the visual features and the input captions are processed by separate LSTMs, and
the final output is generated by performing an element-wise product between the
outputs of the two LSTMs. Our model achieves BLEU scores of 0.20 and 0.22 on the
public and private test sets, respectively.
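
As a rough illustration of this fusion scheme, the sketch below shows a two-stream decoder in PyTorch in which one LSTM runs over the encoded visual features, a second runs over the embedded caption tokens, and their outputs are combined by an element-wise product before the vocabulary projection. The framework choice, layer sizes, and the C3D feature dimension of 4096 are our assumptions for illustration, not the system's exact configuration.

```python
import torch
import torch.nn as nn

class TwoStreamDecoder(nn.Module):
    """Illustrative decoder: one LSTM over C3D visual features, one LSTM
    over embedded caption tokens, fused by element-wise product."""

    def __init__(self, vocab_size, feat_dim=4096, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.visual_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)  # visual stream
        self.text_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)   # language stream
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, captions):
        # feats: (B, T_v, feat_dim) C3D clip features; captions: (B, T_w) token ids
        v_out, _ = self.visual_lstm(feats)               # (B, T_v, hidden_dim)
        w_out, _ = self.text_lstm(self.embed(captions))  # (B, T_w, hidden_dim)
        # Fuse the final visual state with every word step via element-wise product.
        fused = v_out[:, -1:, :] * w_out                 # broadcast over T_w
        return self.out(fused)                           # (B, T_w, vocab_size) logits

# Dummy forward pass with illustrative shapes.
model = TwoStreamDecoder(vocab_size=10000)
logits = model(torch.randn(2, 16, 4096), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```

In this sketch, the multiplicative fusion acts as a gating mechanism: the visual state scales each dimension of the language LSTM's output, so word predictions are conditioned on the video content at every decoding step.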