
Image to LaTeX


Galileo, the father of modern science, said “the book of nature is written in the language of mathematics” and in modern times, mathematics is written in the language of LaTeX. All scientific disciplines from psychology to artificial intelligence use LaTeX as a tool for communicating their brilliant ideas through beautiful, perfectly formatted equations. Unfortunately, there is overhead in taking an equation and transcribing it into the LaTeX language. Im-2-LaTeX is a project with the objective of reducing this overhead for scientists.

Problem Statement

To reduce the time a scientist spends writing a LaTeX equation, we created an automated process that translates images of formulas into LaTeX code. We hope that with this application, users can stop worrying about learning and typing correct LaTeX and instead focus on what really matters - their work. A scientist could capture an equation that already exists in a paper or on the internet and instantly get the LaTeX code, ready to be modified for their purpose. By leveraging deep learning, we trained a model that performs better than the public state of the art for this task.

Previous Work

This project heavily references the Harvard paper What You Get Is What You See. In it, the authors use a neural encoder-decoder model based on a scalable coarse-to-fine attention mechanism to convert images into presentational markup. Our work stands on the shoulders of the Harvard research group: as described in the model section below, we built upon their encoder-decoder architecture.

The Dataset

This problem was inspired by an OpenAI request-for-research prompt. The Harvard group published a prebuilt dataset for the image-to-LaTeX task, Im2LaTeX-100K, containing a total of ~100k LaTeX formulas collected from arXiv along with their rendered images, split into train, validation, and test sets. Each image is a fixed-size PNG with the formula in black and the rest of the image transparent. Before model training we performed heavy preprocessing on the data, for example equation normalization: the same equation can be written in several ways, so we rewrite each formula into a canonical token sequence.
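To make this concrete, here is a toy sketch of one such normalization step, not our actual pipeline: rewriting single-character superscript and subscript arguments into an explicit braced form, so that x^2 and x^{2} map to the same token sequence.

```python
import re

def normalize_formula(latex: str) -> str:
    """Toy normalizer: canonicalize sub/superscripts and whitespace."""
    # x^2 -> x^{2}, a_i -> a_{i}: make single-character arguments explicit
    latex = re.sub(r"([\^_])\s*([A-Za-z0-9])", r"\1{\2}", latex)
    # collapse whitespace runs so spacing differences don't change tokens
    latex = re.sub(r"\s+", " ", latex).strip()
    return latex

print(normalize_formula(r"E = m c^2"))    # E = m c^{2}
print(normalize_formula(r"E = m c^{2}"))  # E = m c^{2}
```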

Data example:

[Image: example formula from the dataset]

Model Architecture

During experimentation, we tested two models. The first used a naive CNN encoder and a GRU decoder with Bahdanau attention. We treated this model as a baseline since it was already implemented as an image-captioning tutorial for TensorFlow 2.0, making it relatively straightforward to apply to our dataset.
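For reference, the additive (Bahdanau) attention layer from that tutorial looks roughly like the following; layer sizes and names are illustrative:

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.Model):
    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        # features: (batch, num_locations, depth) from the CNN encoder
        # hidden:   (batch, units), the decoder GRU's previous state
        hidden_with_time = tf.expand_dims(hidden, 1)
        score = self.V(tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time)))
        weights = tf.nn.softmax(score, axis=1)           # attend over locations
        context = tf.reduce_sum(weights * features, axis=1)
        return context, weights
```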

Our final model architecture was based on the Harvard paper - we essentially used TensorFlow 2.0 to implement a model following the specifications given in the paper. The model consists of three main components:

  • Convolutional neural network (CNN) encoder
  • Bidirectional LSTM row encoder
  • LSTM decoder with Luong-style attention

We designed the convolutional neural network without a fully connected layer, so it can handle inputs of any shape. The purpose of the row encoder is to localize relative positions within the image by scanning across each row of the CNN feature map. The final component is an LSTM decoder with a Luong-style attention mechanism, which builds context vectors for better learning.

[Figure: model architecture - CNN encoder, bidirectional LSTM row encoder, and attentional LSTM decoder]
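A minimal TensorFlow 2 sketch of the three components above; the layer sizes and the eager-mode row loop are illustrative, not the exact hyperparameters from our experiments:

```python
import tensorflow as tf

def build_cnn_encoder():
    # Fully convolutional: no dense layer, so any input size works
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPool2D(2),
        tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPool2D(2),
        tf.keras.layers.Conv2D(256, 3, padding="same", activation="relu"),
    ])

class Encoder(tf.keras.Model):
    def __init__(self, units=256):
        super().__init__()
        self.cnn = build_cnn_encoder()
        self.row_encoder = tf.keras.layers.Bidirectional(
            tf.keras.layers.LSTM(units, return_sequences=True))

    def call(self, images):
        features = self.cnn(images)              # (batch, H', W', C)
        rows = tf.unstack(features, axis=1)      # H' tensors of (batch, W', C)
        encoded = [self.row_encoder(row) for row in rows]
        return tf.concat(encoded, axis=1)        # (batch, H'*W', 2*units)

class Decoder(tf.keras.Model):
    def __init__(self, vocab_size, embed_dim=80, units=512):
        super().__init__()
        # units matches the row encoder's 2*units output for dot-product scores
        self.embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)
        self.lstm = tf.keras.layers.LSTM(
            units, return_sequences=True, return_state=True)
        self.attention = tf.keras.layers.Attention()  # Luong-style dot-product
        self.fc = tf.keras.layers.Dense(vocab_size)

    def call(self, tokens, encoder_out, state=None):
        x = self.embedding(tokens)                    # (batch, T, embed_dim)
        x, h, c = self.lstm(x, initial_state=state)   # (batch, T, units)
        context = self.attention([x, encoder_out])    # query=decoder states
        logits = self.fc(tf.concat([x, context], -1))
        return logits, [h, c]
```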

Training

During training, we experimented with several key components to improve on the baseline. For optimizers, we tried stochastic gradient descent (SGD), Adam, and RMSprop. For initialization techniques, we tried Glorot uniform and He normal. Additionally, we experimented with different learning rates and batch sizes.

After multiple tests, we found that the SGD optimizer with a learning rate that is reduced when the loss plateaus, a batch size of 32, and the He normal initializer gave the best results. A sketch of this setup is shown below, followed by the loss plot.
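This configuration assumes a Keras `model` and `tf.data` datasets from the pipeline above; the initial learning rate and plateau-schedule parameters are illustrative:

```python
import tensorflow as tf

def compile_and_train(model, train_ds, val_ds):
    """Train with the settings that worked best for us; `model`,
    `train_ds`, and `val_ds` are assumed to exist already."""
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

    # Halve the learning rate whenever the validation loss plateaus
    reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.5, patience=3, min_lr=1e-5)

    model.fit(train_ds.batch(32),              # batch size of 32
              validation_data=val_ds.batch(32),
              epochs=50,
              callbacks=[reduce_lr])

# He normal initialization is chosen where layers are built, e.g.:
# tf.keras.layers.Conv2D(64, 3, kernel_initializer="he_normal")
```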

[Plot: training loss for the baseline model, the state-of-the-art model, and our experimental model]

We compared the loss among the baseline model, the state-of-the-art model, and our experimental model, as shown above. The loss is cross entropy, and the training loss of our best-performing model was lower than that of the state-of-the-art model.

In addition, we adopted the evaluation metric from the Harvard paper, the perplexity score:

\[ \text{perplexity} = \exp\left( -\frac{1}{N} \sum_{i=1}^{N} \log p\left(y_i \mid y_{<i}, x\right) \right) \]

i.e., the exponentiated average negative log-likelihood per predicted token, so lower is better.

The perplexity scores on the training and validation sets were also lower than those of the state-of-the-art model.
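Because perplexity is just the exponentiated mean cross entropy, it can be computed directly from the loss. A minimal sketch, assuming token-level sparse cross entropy as above:

```python
import tensorflow as tf

def perplexity(labels, logits):
    # mean per-token cross entropy, then exponentiate
    ce = tf.keras.losses.sparse_categorical_crossentropy(
        labels, logits, from_logits=True)
    return tf.exp(tf.reduce_mean(ce))
```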

Sample Images

Below are some examples of input images given to the model and the rendered output of its predictions. All images were taken from the test set - a subset of the data that was never used for model training or hyperparameter tuning.

[Images: test-set formulas (inputs) and the rendered outputs of the model's predicted LaTeX]

The most surprising characteristic of the model is its ability to write complete, syntactically valid LaTeX that renders with no changes. Only the last example, with the multiple equations, needed \end{array} appended for it to render. Eventually we would like to deploy the model so that users can take screenshots of equations and test it for themselves.

Current Challenges & Future Work

Although our final model achieves high predictive performance on the dataset, it still gives poor predictions on ordinary screenshots of equations. This reflects a significant distribution shift between the pre-processed input images in the dataset and the images we expect from regular users (see figure below).

[Figure: (A) the model's accurate prediction on a pre-processed dataset image; (B) its poor prediction on a raw screenshot]

The figure above illustrates the difference between an image from the dataset (pre-processed) and an image from a screenshot: (A) shows that the model gives an accurate output prediction on the dataset image, while (B) shows that it gives a poor output prediction on the screenshot image.

In order to deploy our model as a useful software tool, we must first address this distribution shift. From our initial experiments, the main reason behind the shift is likely the rigid preprocessing pipeline used to build the dataset: the pre-processed input images are not an accurate representation of expected user inputs. We are currently exploring two main methods to address this:

1. Create a flexible pre-processing pipeline with random image augmentation; the augmentations should capture the distortions that are common in screenshots (see the sketch after this list)

2. Replace the hard-coded pre-processing with an additional deep learning component that can learn the optimal mapping from raw to pre-processed images - although this may be more difficult, we believe it represents the most promising route towards general optical character recognition (OCR)
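A first pass at the first method might look like the following; the distortion set and parameter ranges are assumptions we still need to validate against real screenshots:

```python
import tensorflow as tf

def screenshot_augment(image):
    """Distort a clean dataset image (float32, values in [0, 1]) so it
    looks more like a user screenshot."""
    # Loose crops: users rarely crop tightly around the equation
    pad = tf.random.uniform([], 0, 20, dtype=tf.int32)
    image = tf.pad(image, [[pad, pad], [pad, pad], [0, 0]],
                   constant_values=1.0)  # assumes a white background
    # Different zoom levels / screen resolutions
    scale = tf.random.uniform([], 0.5, 1.5)
    new_size = tf.cast(
        tf.cast(tf.shape(image)[:2], tf.float32) * scale, tf.int32)
    image = tf.image.resize(image, new_size)
    # Display and capture artifacts
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, 0.8, 1.2)
    return tf.clip_by_value(image, 0.0, 1.0)
```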

Conclusion

We are excited about the model's performance and look forward to continuing to improve it. We plan on using Weights & Biases' newly released hyperparameter tuning feature, as well as adjusting the dataset to make the acceptable inputs less rigid.
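A sketch of how such a sweep might be configured with the W&B Python API; the search method, parameter ranges, and the `train` entry point are illustrative:

```python
import wandb

sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "batch_size": {"values": [16, 32, 64]},
        "initializer": {"values": ["he_normal", "glorot_uniform"]},
    },
}

def train():
    run = wandb.init()
    cfg = run.config  # hyperparameters chosen by the sweep controller
    # ... build and train the model using cfg.learning_rate, etc. ...

sweep_id = wandb.sweep(sweep_config, project="im2latex")
wandb.agent(sweep_id, function=train)
```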

Thank you to Weights & Biases for putting on the applied deep learning course, which allowed us to work on amazing projects with other deep learning enthusiasts.

