Quick Concepts: Active Learning
What is active learning?
The term “active learning” comes from education, where it describes teaching that emphasizes student engagement and participation over passive listening and reading. In machine learning, active learning refers to a semi-supervised learning approach in which only a subset of a dataset is labeled. Here, the model determines which data points would be most useful or informative for training, then queries a human annotator or a fully labeled dataset for the correct labels for those points. By pre-selecting the most helpful data points to label rather than labeling an entire dataset, active learning can save human effort, time, and cost in model training.
How does active learning work?
Active learning can be implemented in multiple ways, and it is important to determine the best approaches and sampling methods for each project.
There are three primary approaches to active learning:
- Pool-based sampling, the most commonly used method, where the model evaluates the entire dataset or subset and determines which data points would be most useful for training. This can require a large amount of computational power and memory.
- Stream-based sampling, where the model assesses each data point individually and decides whether to query its label. This can require a large amount of manual labeling.
- Membership query synthesis, where the algorithm generates its own synthetic data points for training. This method currently has limited applications because it relies on the ability to create accurate synthetic data.
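To make the pool-based approach concrete, here is a minimal sketch of the query loop, assuming hypothetical callables (`oracle`, `train`, `predict_confidence`) that stand in for the human labeler, model fitting, and model scoring; it is an illustration of the idea, not a production implementation.

```python
import random

def pool_based_active_learning(pool, oracle, train, predict_confidence,
                               budget, batch_size):
    """Sketch of pool-based sampling (all helper names are hypothetical).

    pool: list of unlabeled examples
    oracle: callable(example) -> true label (the "human in the loop")
    train: callable(list of (example, label)) -> fitted model
    predict_confidence: callable(model, example) -> confidence in [0, 1]
    """
    labeled = []
    model = None
    while pool and len(labeled) < budget:
        if model is None:
            # Cold start: no model yet, so label a random seed batch.
            batch = random.sample(pool, min(batch_size, len(pool)))
        else:
            # Score the entire pool and pick the least-confident examples.
            scored = sorted(pool, key=lambda x: predict_confidence(model, x))
            batch = scored[:batch_size]
        for x in batch:
            pool.remove(x)
            labeled.append((x, oracle(x)))  # query the oracle for a label
        model = train(labeled)  # retrain on the growing labeled set
    return model, labeled
```

Note that each iteration scores every remaining example in the pool, which is exactly the computational cost mentioned above; stream-based sampling avoids it by scoring one example at a time.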
There are also several sampling methods or querying strategies used in active learning, including:
- Least confidence, where the model queries the data points with the lowest-confidence predictions.
- Smallest margin, where the model queries the data points with the tightest margin between its two strongest predictions.
- Query by committee, where the dataset is presented to multiple models, and the data points with the highest disagreement between models are selected for querying.
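The three strategies above can each be expressed as a small scoring function. The sketch below assumes the model exposes a class-probability vector per example (and, for query by committee, that each committee member casts a predicted-label vote); these interfaces are illustrative assumptions, not part of any particular library.

```python
import math

def least_confidence(probs):
    # Uncertainty = 1 - probability of the most likely class;
    # higher score = better candidate for labeling.
    return 1.0 - max(probs)

def smallest_margin(probs):
    # Gap between the two strongest class probabilities;
    # a smaller margin means a more uncertain (more useful) example.
    top2 = sorted(probs, reverse=True)[:2]
    return top2[0] - top2[1]

def committee_disagreement(votes):
    # Vote entropy across a committee of models' predicted labels;
    # higher entropy = more disagreement = better candidate.
    n = len(votes)
    counts = {}
    for v in votes:
        counts[v] = counts.get(v, 0) + 1
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

In a pool-based setting, the model would score every unlabeled example with one of these functions and send the top-ranked examples to a human for labeling.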
Where is active learning used, and what are the tradeoffs?
Active learning lends itself to applications such as natural language processing and medical imaging, where it may be difficult or cumbersome to obtain large labeled datasets. This human-in-the-loop approach is typically useful in cases where there is a large pool of unlabeled data and limited time, staff, or budget to label it, or where there is limited data that needs to be used judiciously to avoid errors and bias. When applied effectively, active learning can achieve high accuracy in less time, with less human involvement, than traditional supervised learning approaches. However, the reduced human cost must be weighed against the increased computational power and iterative training cycles that active learning requires.