FAQs
- What is Highlighter?
- What is a machine learning project?
- Which machine learning project components does Highlighter support?
- Can I use Highlighter for video?
- Can I use Highlighter for text and audio projects?
- Does Highlighter have pre-trained models?
- How is data exported?
- Can I upload data that has already been annotated?
- How does a team operate?
- Can I monitor the labellers' work?
- How do I create a project?
- How do I manage my team?
- What is an Annotation?
- What is an Object Class?
- What is a Data Source?
- What is a Queue?
- How do you ensure quality control?
- Where do I find statistics about my project?
**What is Highlighter?**

Highlighter gives you the freedom to focus on machine learning rather than the supporting software. We take care of the infrastructure, allowing you to develop and deploy custom machine learning models in an efficient and cost-effective way.
**What is a machine learning project?**

Highlighter provides a seamless interface to label images and train models. These models can then be deployed via an API for inference in production. Progressive improvement of the labelling process is supported through a reporting and feedback loop.
**Which machine learning project components does Highlighter support?**

Highlighter currently supports the following project components:

- Assignment of a team
- Selection of models
- Import and annotation of training data
- Initiating training
- Integrating with business services via an API (a sketch of such an integration follows this list)
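As an illustration of the last point, the sketch below sends an image to an inference endpoint over HTTP. The URL, the token header and the response shape are hypothetical placeholders, not Highlighter's actual API; consult the API documentation for the real contract.

```python
import requests

# Hypothetical endpoint and token: placeholders only, not Highlighter's real API.
API_URL = "https://example.com/api/v1/inference"
API_TOKEN = "your-api-access-token"

def request_inference(image_path: str) -> dict:
    """Send an image to a (hypothetical) inference endpoint and return the JSON result."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"image": f},
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = request_inference("example.jpg")
    print(result)
```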
**Can I use Highlighter for video?**

No, Highlighter does not currently support video natively. However, you can convert a video to frames yourself, allowing you to use Highlighter to analyse the resulting images; one common way to do this is sketched below. We are working on native video support and hope to launch it later in the year.
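For example, frames can be extracted with OpenCV. This is a generic sketch, not a Highlighter feature, and the file names and frame interval are arbitrary choices.

```python
import os

import cv2  # pip install opencv-python

def video_to_frames(video_path: str, output_dir: str, every_nth: int = 10) -> int:
    """Write every nth frame of a video to output_dir as JPEGs; return the count written."""
    os.makedirs(output_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    written = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video (or read error)
            break
        if frame_index % every_nth == 0:
            cv2.imwrite(f"{output_dir}/frame_{frame_index:06d}.jpg", frame)
            written += 1
        frame_index += 1
    capture.release()
    return written

if __name__ == "__main__":
    print(video_to_frames("input.mp4", "frames", every_nth=10))
```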
**Can I use Highlighter for text and audio projects?**

No, Highlighter cannot currently be used for text or audio projects.
**Does Highlighter have pre-trained models?**

Highlighter does have some pre-trained models available. Please contact the Silverpond team for more information.
**How is data exported?**

Data can be exported through the queue system in either JSON or PASCAL VOC format. A sketch of reading a PASCAL VOC annotation file follows.
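PASCAL VOC is a widely used XML annotation format, so as a rough illustration the sketch below reads the bounding boxes out of a single VOC file using Python's standard library. The exact fields Highlighter writes may differ; treat this as a sketch of the standard VOC layout.

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path: str) -> list[dict]:
    """Read object names and bounding boxes from a PASCAL VOC annotation file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        bndbox = obj.find("bndbox")
        boxes.append({
            "name": obj.findtext("name"),
            "xmin": int(float(bndbox.findtext("xmin"))),
            "ymin": int(float(bndbox.findtext("ymin"))),
            "xmax": int(float(bndbox.findtext("xmax"))),
            "ymax": int(float(bndbox.findtext("ymax"))),
        })
    return boxes

if __name__ == "__main__":
    for box in read_voc_boxes("annotation.xml"):
        print(box)
```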
**Can I upload data that has already been annotated?**

Yes, you can.
**How does a team operate?**

There are three roles within a team: owner, manager and labeller. When you create your Highlighter account, you become an owner, which allows you to create other users and manage billing. Managers can assign work, curate guests, supervise team contributions and create API access tokens. Labellers perform the annotation work to build the data set.
**Can I monitor the labellers' work?**

As a manager, you can monitor your labellers' progress, spot-check their work and compare their annotations with those of other labellers. One standard way to measure agreement between two labellers' bounding boxes is sketched below.
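Intersection over union (IoU) is a common agreement metric for bounding boxes: 1.0 means identical boxes, 0.0 means no overlap. This is a generic metric, not a description of Highlighter's built-in comparison.

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection over union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the overlapping region (zero if the boxes are disjoint).
    overlap_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    overlap_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    intersection = overlap_w * overlap_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - intersection
    return intersection / union if union > 0 else 0.0

# Two labellers' boxes for the same object: an IoU near 1.0 indicates close agreement.
print(iou((10, 10, 50, 50), (12, 12, 50, 48)))  # ~0.86
```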
**How do I manage my team?**

The main activity in managing your team is creating team members with the correct roles and assigning them work via queues. Under your account icon, you will find a link to "Manage Team", where you can add new team members with the desired roles, send invitations and delete team members. Note that if you are setting up an annotation project for the first time, you can add your team members through the new annotation project wizard.
**What is an Annotation?**

Annotations are labels added to your images that attach information to specific image regions. This information most commonly takes the form of object classes and metadata.
**What is an Object Class?**

An object class is a label that you can use to tag a portion of an image. The object classes you set up and assign to projects appear in the annotation user interface, where labellers use them to tag images. A sketch of how annotations and object classes fit together follows.
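To make the relationship concrete, here is one minimal way to model annotations and object classes as plain data. The field names are illustrative only; Highlighter's internal representation will differ.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectClass:
    """A label that can be applied to a region of an image, e.g. 'car' or 'pedestrian'."""
    name: str

@dataclass
class Annotation:
    """A labelled image region: a bounding box tagged with an object class plus metadata."""
    object_class: ObjectClass
    bbox: tuple  # (xmin, ymin, xmax, ymax) in pixel coordinates
    metadata: dict = field(default_factory=dict)

# One annotation on an image: a 'car' box with some free-form metadata attached.
car = ObjectClass(name="car")
annotation = Annotation(object_class=car, bbox=(34, 120, 310, 255), metadata={"occluded": False})
print(annotation)
```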
**What is a Data Source?**

A data source is the location of a collection of images, such as an AWS S3 bucket. You can use a data source to feed work to teams through image queues; a queue is automatically created for you when you set up a data source with the wizard. Data sources can be viewed and synced after creation, and can be viewed independently of projects in order to review the images they reference.
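Since a data source simply points at a collection of images, a rough sketch of what it references is listing a bucket's image keys with boto3. The bucket name and prefix are placeholders, and this is not Highlighter's sync mechanism.

```python
import boto3  # pip install boto3; requires AWS credentials to be configured

def list_image_keys(bucket: str, prefix: str = "") -> list[str]:
    """List the keys of JPEG/PNG objects under a prefix in an S3 bucket."""
    s3 = boto3.client("s3")
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if obj["Key"].lower().endswith((".jpg", ".jpeg", ".png")):
                keys.append(obj["Key"])
    return keys

if __name__ == "__main__":
    # Placeholder bucket and prefix: substitute your own data source location.
    print(list_image_keys("my-training-images", prefix="project-a/"))
```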
**What is a Queue?**

A queue is the mechanism through which work is assigned to both team members and machines. Queues let you create streams of images that can be fanned out to team members for annotation or quality assurance tasks, or to machine learning workers for training or inference.
**How do you ensure quality control?**

To add a quality control process to your project, you simply create and assign an additional queue. This queue takes as input the upstream data to be quality checked and is assigned to the team members who will perform quality assurance. Those team members can then reject, flag or repair issues with existing annotations before sending them downstream on submission, either for additional quality control or for use in training and reporting. A minimal sketch of this flow follows.
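As a rough mental model only (not Highlighter's implementation), the sketch below chains an annotation queue into a quality assurance step that approves or rejects items before they move downstream. The names and the pass/reject rule are invented for illustration.

```python
from collections import deque

# Invented for illustration: items flow annotation queue -> QA queue -> downstream.
annotation_queue = deque(["image_001", "image_002", "image_003"])
qa_queue = deque()
downstream = []   # approved items, ready for training or reporting
rejected = []     # items sent back for repair

# Fan annotated items into the QA queue.
while annotation_queue:
    qa_queue.append({"item": annotation_queue.popleft(), "annotated": True})

# QA workers approve or reject each item (here, an arbitrary toy rule).
while qa_queue:
    record = qa_queue.popleft()
    if record["item"] != "image_002":   # pretend image_002 has a labelling issue
        downstream.append(record)
    else:
        rejected.append(record)

print("approved:", [r["item"] for r in downstream])
print("rejected:", [r["item"] for r in rejected])
```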