Is it possible to perform OCR tasks using RNNSharp? #1
Comments
Yes. RNNs perform well on OCR tasks such as handwriting recognition. Generally, you segment the handwritten word images first and then recognize each word in the image. Feature selection is the key part: you can design and combine features manually, and the RNN can also generate features automatically for you, for example by embedding input pixels into vectors. With a reasonable feature set, you can then choose a classifier to decide which word is in the image.
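To illustrate the "embedding input pixels into vectors" idea, here is a minimal sketch (Python with numpy/Pillow, not RNNSharp's own API) that slices a height-normalized word image into one feature vector per pixel column and writes the sequences in a CRF++-style column layout, which is similar to the text corpus format RNNSharp's sequence labeler consumes (check the project README for the exact columns your template expects). The file names, per-column labels, and helper names below are hypothetical.

```python
# Illustrative sketch only (not RNNSharp API): turn a handwritten word image
# into a sequence of per-column pixel feature vectors, one timestep per column.
import numpy as np
from PIL import Image

def image_to_feature_sequence(path, height=32):
    """Load a word image, normalize its height, and emit one feature
    vector per pixel column (a simple 'pixels embedded into vectors')."""
    img = Image.open(path).convert("L")                  # grayscale
    w = max(1, int(img.width * height / img.height))     # keep aspect ratio
    img = img.resize((w, height))
    pixels = np.asarray(img, dtype=np.float32) / 255.0   # shape: (height, width)
    return [pixels[:, col] for col in range(pixels.shape[1])]

def write_column_corpus(samples, out_path):
    """Write sequences in a CRF++-style column layout: one line per timestep,
    feature columns followed by the label, blank line between sequences."""
    with open(out_path, "w", encoding="utf-8") as f:
        for features, labels in samples:
            for vec, label in zip(features, labels):
                cols = " ".join(f"{v:.3f}" for v in vec)
                f.write(f"{cols} {label}\n")
            f.write("\n")                                 # sequence separator

# Hypothetical usage: the per-column label sequence must be aligned elsewhere
# (e.g. by a forced alignment step or a CTC-style training objective).
# feats = image_to_feature_sequence("word_001.png")
# write_column_corpus([(feats, ["h"] * len(feats))], "train.txt")
```

Treating each pixel column as a timestep is one common way to avoid explicit character segmentation; the labeler then tags columns instead of pre-cut glyphs.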
I'm a newbie in Machine Learning but very interested in it. Best regards,
Hi there,
Recently I want to run some tests on sequence labelling (OCR without segmentation) via RNN.
I googled and found this project; thanks for your efforts.
I have hundreds of handwritten word images and the corresponding words.
Would you give me some instructions on this problem?
Any advice is welcome. Thanks in advance!
Best regards,