Deep learning with
real-world impact

Problem: the labelling bottleneck

It's every machine learning researcher's nightmare: the need to source or create large volumes of labelled data. For a generic task, you can crowdsource the labels. It's laborious and time-consuming, but doable.

For an expert task? That's a whole different level: experts are rare, often unavailable, and always expensive.

Spend millions on labels.
Or find a smarter way.

It would cost millions of dollars and take years to get the labels we need for expert tasks. That's wasted time and wasted money. We found another way.

Solved: interactive learning

We're building all-purpose tagging machinery, based on interactive learning. The result? Our algorithm starts with unlabelled data, and achieves the same performance as the baseline method in a fraction of the time. Here's how:

1. Dimensionality reduction

We plot the data in 2D, so we can understand how the algorithm sees it.

2. Information retrieval

We can see where the algorithm struggles most, and focus on that.

3. Transferable learning

Our algorithm can apply knowledge from similar tasks.

4. Scalable infrastructure

So we can run algorithms on hundreds of millions of samples.
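The interactive loop behind steps like these can be sketched in a few lines: train on a small labelled seed set, then repeatedly ask an expert to label only the samples the model is least certain about. This is a minimal illustration using uncertainty sampling on synthetic data; the dataset, model, and batch sizes are assumptions for the sketch, not Tractable's production system.

```python
# Minimal active-learning loop: start with a tiny labelled seed set,
# then query labels for the samples the model is least sure about.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a large unlabelled dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Seed with five labelled examples per class (guarantees both classes).
labelled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labelled]  # unlabelled pool

model = LogisticRegression(max_iter=1000)
for _ in range(5):  # five rounds of expert interaction
    model.fit(X[labelled], y[labelled])
    # Uncertainty sampling: samples whose P(class=1) is closest to 0.5.
    probs = model.predict_proba(X[pool])[:, 1]
    order = np.argsort(np.abs(probs - 0.5))
    query = [pool[i] for i in order[:5]]  # 5 most uncertain samples
    labelled += query                     # an expert would label these
    pool = [i for i in pool if i not in query]

accuracy = model.score(X, y)
```

Each round spends the expert's time only where the model struggles most, which is what lets an interactive approach match a passively trained baseline with far fewer labels.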

A hundred times faster

Tractable algorithms can label as much data in one hour as the baseline method can in a hundred. It’s a choice: go slow and blind, or get fast and interactive.

Join us

Get updates

Stay up to date with the latest AI news and views from Tractable