Blog
AI research, ideas and product updates.

Why you need to calculate error bounds on your test metrics

Raza Habib

Machine learning test metrics should always be calculated with credible intervals. Credible intervals give you upper and lower bounds on test performance, so you know how big your test set needs to be and when to trust your models. Humanloop Active Testing gives you uncertainty bounds on your test metrics and makes this easy.
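To make the idea concrete, here is a minimal sketch (not the Humanloop Active Testing API) of a 95% credible interval for accuracy, assuming a Beta posterior with a uniform prior and hypothetical test counts:

```python
# A minimal sketch: Bayesian credible interval for accuracy on a test set.
# The counts below are hypothetical, not taken from the post.
from scipy.stats import beta

n_correct, n_wrong = 88, 12                    # hypothetical results on 100 test examples
posterior = beta(1 + n_correct, 1 + n_wrong)   # Beta(1, 1) uniform prior over true accuracy

lower, upper = posterior.interval(0.95)        # 95% credible interval
print(f"accuracy ≈ {n_correct / (n_correct + n_wrong):.2f}, "
      f"95% credible interval: [{lower:.2f}, {upper:.2f}]")
```

With only 100 examples the interval is wide; as the test set grows, the bounds tighten, which is how the interval tells you when your test is big enough.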

Announcing Programmatic 4.0

Raza Habib

We're really excited to announce Programmatic 4.0 with support for No-Code Templates — simple UI-based labeling functions that anyone can understand even if they don't know how to program.

Why you should be annotating in-house

Raza Habib

There are huge advantages to labeling in-house, such as quality control, faster iteration and privacy. New technologies like transfer learning, programmatic labeling and active learning are now making it practical for the best teams.

Introducing Humanloop Programmatic

Jordan Burgess

Humanloop Programmatic is now available for early access: a powerful weak labeling tool to rapidly annotate your NLP datasets.

Why I changed my mind about weak labeling for ML

Raza Habib

Weak labeling can replace manually annotated data with data created from heuristic rules written by domain experts. Here I want to explain what weak supervision is, what I learned and why I think it complements techniques like active learning for data annotation.
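As a rough illustration of the idea (a minimal sketch, not the Humanloop Programmatic API), heuristic rules written by a domain expert can each vote on a label, and their votes are combined into a weak label. The rules and label names below are hypothetical:

```python
# A minimal sketch of weak labeling: expert-written heuristic rules vote on each example.
import re

POSITIVE, NEGATIVE, ABSTAIN = 1, 0, -1

def lf_mentions_refund(text: str) -> int:
    # Domain heuristic: mentions of refunds usually indicate a complaint.
    return NEGATIVE if re.search(r"\brefund\b", text, re.I) else ABSTAIN

def lf_says_thanks(text: str) -> int:
    # Domain heuristic: thanking the team usually indicates positive feedback.
    return POSITIVE if re.search(r"\bthank(s| you)\b", text, re.I) else ABSTAIN

def weak_label(text: str) -> int:
    """Combine labeling functions by majority vote, ignoring abstentions."""
    votes = [lf(text) for lf in (lf_mentions_refund, lf_says_thanks)]
    votes = [v for v in votes if v != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN

print(weak_label("Thanks so much, this solved my issue!"))  # -> 1 (POSITIVE)
```

In practice the votes would be combined with a label model rather than a simple majority, but the sketch shows how expert rules stand in for manual annotation.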
