Training a token classification model
Last week we covered how to use the tokenizers library to get our data into a state where we can train on token classification tasks. This week, we'll pick up where we left off and go over how to train a model and use it for inference on a Named Entity Recognition (NER) task. We'll cover the various strategies you can use to label your tokens, how to score the results, and the metrics you'll likely want to use.
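As a preview, here is a minimal sketch of one common labeling strategy for NER: only the first sub-word of each word keeps its label, and the remaining sub-words are masked with -100 so the loss ignores them. This assumes the Hugging Face tokenizers/transformers workflow from last week's session; the checkpoint name, example words, and label ids below are placeholders, not the exact setup used in the talk.

```python
from transformers import AutoTokenizer

# Any fast tokenizer works here; "bert-base-cased" is just an example checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

words = ["Wayde", "lives", "in", "San", "Diego"]   # pre-split words
word_labels = [1, 0, 0, 3, 4]                      # e.g. B-PER, O, O, B-LOC, I-LOC as class ids

encoding = tokenizer(words, is_split_into_words=True, truncation=True)

aligned_labels = []
previous_word_id = None
for word_id in encoding.word_ids():    # maps each token back to its source word
    if word_id is None:                # special tokens like [CLS] and [SEP]
        aligned_labels.append(-100)
    elif word_id != previous_word_id:  # first sub-word of a new word keeps the label
        aligned_labels.append(word_labels[word_id])
    else:                              # subsequent sub-words are ignored by the loss
        aligned_labels.append(-100)
    previous_word_id = word_id

print(list(zip(encoding.tokens(), aligned_labels)))
```

An alternative strategy, which we'll also touch on, is to propagate the word's label (or its I- variant) to every sub-word instead of masking them.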