{"id":7229,"library":"flair","title":"Flair NLP","description":"Flair is an open-source framework for state-of-the-art Natural Language Processing (NLP) built on PyTorch. It provides a simple, unified interface for NLP tasks such as named entity recognition, sentiment analysis, part-of-speech tagging, and text classification, with robust support for multilingual models and embeddings. Currently at version 0.15.1, Flair maintains an active release cadence, regularly adding new features and fixing bugs.","status":"active","version":"0.15.1","language":"en","source_language":"en","source_url":"https://github.com/flairNLP/flair","tags":["NLP","Natural Language Processing","Deep Learning","PyTorch","Text Processing","Embeddings","NER","Sentiment Analysis","Text Classification"],"install":[{"cmd":"pip install flair","lang":"bash","label":"Latest stable release"}],"dependencies":[{"reason":"Flair is built directly on PyTorch and requires it for core functionality and model training.","package":"torch","optional":false},{"reason":"Used for integrating and fine-tuning transformer-based models and embeddings.","package":"transformers","optional":true},{"reason":"A dependency for certain word embedding functionalities.","package":"gensim","optional":true},{"reason":"Used for SentencePiece tokenization models.","package":"sentencepiece","optional":true},{"reason":"Used for Byte-Pair Embeddings.","package":"bpemb","optional":true}],"imports":[{"symbol":"Sentence","correct":"from flair.data import Sentence"},{"note":"While older tutorials might show `flair.models.Classifier`, the current recommended path for loading pre-trained taggers (like 'ner', 'sentiment') is `flair.nn.Classifier`.","wrong":"from flair.models import Classifier","symbol":"Classifier","correct":"from flair.nn import Classifier"},{"symbol":"SequenceTagger","correct":"from flair.models import SequenceTagger"},{"symbol":"TextClassifier","correct":"from flair.models import TextClassifier"},{"symbol":"ModelTrainer","correct":"from flair.trainers import ModelTrainer"},{"symbol":"WordEmbeddings","correct":"from flair.embeddings import WordEmbeddings"},{"symbol":"TransformerDocumentEmbeddings","correct":"from flair.embeddings import TransformerDocumentEmbeddings"}],"quickstart":{"code":"from flair.data import Sentence\nfrom flair.nn import Classifier\n\n# Make a sentence\nsentence = Sentence('I love Berlin and New York.')\n\n# Load the NER tagger\ntagger = Classifier.load('ner')\n\n# Run NER over the sentence\ntagger.predict(sentence)\n\n# Print the sentence with all annotations\nprint(sentence)\n\n# Example for sentiment analysis\nsentence_sentiment = Sentence('Flair makes NLP so easy!')\nsentiment_model = Classifier.load('sentiment')\nsentiment_model.predict(sentence_sentiment)\nprint(sentence_sentiment)","lang":"python","description":"This quickstart demonstrates how to perform Named Entity Recognition (NER) and sentiment analysis using Flair's pre-trained models. Create a `Sentence` object, load a `Classifier` for a specific task (e.g., 'ner' or 'sentiment'), and call `predict()` on the sentence."},"warnings":[{"fix":"Upgrade your Python environment to 3.9 or higher.","message":"Python 3.8 support was deprecated and effectively dropped starting with Flair v0.15.0. Earlier versions (0.13.x) set 3.8 as a *minimum* requirement, but current versions require Python 3.9+.","severity":"breaking","affected_versions":">=0.15.0"},{"fix":"Adjust your training code: `trainer = ModelTrainer(model, corpus)` and then `trainer.train(..., optimizer=torch.optim.AdamW)`.","message":"The `ModelTrainer` API changed in v0.10. The `optimizer` argument is no longer passed to the `ModelTrainer` constructor; it is instead passed as a parameter to the `train()` or `fine_tune()` methods.","severity":"breaking","affected_versions":">=0.10.0"},{"fix":"Upgrade Flair to the latest version (0.15.1 or newer) to ensure compatibility with recent PyTorch and SciPy releases. If you must use an older Flair version, pin your PyTorch and SciPy versions to known compatible ones (e.g., from the Flair requirements.txt of that version).","message":"Older versions of Flair (prior to v0.15.1) may experience compatibility issues with newer versions of PyTorch and SciPy.","severity":"gotcha","affected_versions":"<0.15.1"},{"fix":"If you were using this module, it is no longer available. You may need to find an alternative clustering solution or adapt your code.","message":"The `flair.models.clustering` module has been removed due to low usage and a reported CVE.","severity":"deprecated","affected_versions":">=0.15.0"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Ensure Flair is installed in your active environment: `pip install flair`.","cause":"The Flair library is not installed, or the Python environment where it was installed is not active.","error":"ModuleNotFoundError: No module named 'flair'"},{"fix":"Install PyTorch separately following the official instructions from [pytorch.org](https://pytorch.org/get-started/locally/) for your specific OS, CUDA version, and Python. Then, `pip install flair`.","cause":"PyTorch, a core dependency, often has specific installation requirements, especially for GPU support, that `pip install flair` might not fully resolve automatically in all environments.","error":"Could not find a version that satisfies the requirement torch"},{"fix":"Remove `optimizer` from the `ModelTrainer` initialization and pass it to the `train()` or `fine_tune()` method instead: \n`trainer = ModelTrainer(model, corpus)` \n`trainer.train('output_path', optimizer=torch.optim.AdamW, ...)`","cause":"This error occurs in Flair versions 0.10.0 and later because the `optimizer` argument was moved from the `ModelTrainer` constructor to its `train()` and `fine_tune()` methods.","error":"TypeError: ModelTrainer.__init__() got an unexpected keyword argument 'optimizer'"},{"fix":"Cast your target labels to a floating-point type (e.g., `torch.float`) before passing them to the loss function during training, particularly when using binary cross-entropy or regression losses.","cause":"This typically happens during training when the labels (targets) are integers (Long) but the loss function expects floating-point values, as with regression or certain multi-label classification setups.","error":"RuntimeError: 'target' must be of floating point type, but got Long"}]}