Description
Model I am using: LayoutLM
Following the instructions in this notebook:
https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Add_image_embeddings_to_LayoutLM.ipynb
I was able to adapt the notebook locally to use my own dataset. It worked great, and inference on unseen images was really impressive.
However, I wanted to implement the same logic outside of a notebook, and I'm having trouble getting the same results after training the model locally, saving it, and then later loading that locally saved model and running inference with it. The results aren't great and look the same as if I hadn't added the visual embeddings at all.
What I did was, after adding the image embeddings to LayoutLM and training the model, I saved it locally to the /my_model directory like so:
LayoutLMForTokenClassification.layoutlm.save_pretrained('/my_model')
And later, when I wanted to run document inference, I followed all the steps in the notebook but loaded the LayoutLM model from that directory like so:
layoutlm = LayoutLMModel.from_pretrained('/my_model')
I'm very new to all this, so please forgive any terminology mistakes or other butchering. How can we save the LayoutLM model together with the added visual embeddings, so that we can load the trained model and the visual embeddings again later and use the model for inference?
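For what it's worth, here is a minimal sketch of the pattern I think applies: calling save_pretrained on the .layoutlm submodule persists only the base LayoutLM weights, not any extra layers added around it, so the whole custom module's state_dict has to be saved and reloaded instead. The class and layer names below (VisualLayoutLM, visual_proj) are illustrative stand-ins, not the notebook's actual code:

```python
import torch
import torch.nn as nn

class VisualLayoutLM(nn.Module):
    """Illustrative wrapper: a base model plus added visual-embedding layers."""
    def __init__(self, hidden_size=768, visual_dim=1024, num_labels=5):
        super().__init__()
        # stand-in for the pretrained LayoutLM body
        self.layoutlm = nn.Linear(hidden_size, hidden_size)
        # stand-in for the added visual-embedding projection
        self.visual_proj = nn.Linear(visual_dim, hidden_size)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, text_feats, visual_feats):
        fused = self.layoutlm(text_feats) + self.visual_proj(visual_feats)
        return self.classifier(fused)

model = VisualLayoutLM()
# Save the FULL model's weights, including the added visual layers --
# not just model.layoutlm, which would drop visual_proj and classifier.
torch.save(model.state_dict(), "my_model.pt")

# Later: rebuild the same architecture, then load all the weights back.
restored = VisualLayoutLM()
restored.load_state_dict(torch.load("my_model.pt"))
restored.eval()  # switch to inference mode
```

The key point is that load_state_dict restores every registered submodule, so the trained visual embeddings come back along with the base weights.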