- Revised the notebooks to update outdated examples when using `client.create_project()` to create a project
- Add exports v1 deprecation warning
- Create method in SDK to modify LPO priorities in bulk
- Remove backoff library
- Remove LPO deprecation warning and allow greater range of priority values
- Add an sdk method to get data row by global key
- Disallow invalid quality modes during create_project
- Python 3.10 support
- Change return of dataset.create_data_rows() to Task
- Add new header to capture python version
- Updated examples to match latest updates to SDK
- Added methods to create multiple batches for a project from a list of data rows
- Limit the number of data rows to be checked for processing status
- Added global keys to export v2 filters for project, dataset and DataRow
- Added workflow task status filtering for export v2
- Removed labels notebook, since almost all of the relevant methods in the notebook were not compatible with workflow paradigm.
- Updated project.ipynb to use batches not datasets
- Support batch_ids filter for projects in Exports v2
- Added access_from field to project members to differentiate project-based roles from organization level roles
- Ability to use data_row_ids instead of the whole data row object for DataRow.export_V2()
- Cursor-based pagination for dataset.data_rows()
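Cursor-based pagination returns a server-issued cursor with each page instead of an offset, so iteration stays stable while rows are added or removed. A minimal sketch of the idea (the `fetch_page` endpoint and its fields are hypothetical, not the SDK's actual internals):

```python
# Illustrative cursor pagination; FAKE_DB stands in for the server's data.
FAKE_DB = [f"row-{i}" for i in range(10)]

def fetch_page(cursor, page_size=4):
    """Pretend server endpoint: returns (items, next_cursor)."""
    start = cursor or 0
    items = FAKE_DB[start:start + page_size]
    next_cursor = start + page_size if start + page_size < len(FAKE_DB) else None
    return items, next_cursor

def iterate_all():
    """Walk every page by following the cursor until the server returns None."""
    cursor = None
    while True:
        items, cursor = fetch_page(cursor)
        yield from items
        if cursor is None:
            break

print(list(iterate_all()))
```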
- client.get_projects() unable to fetch details for LLM projects
- Improved the documentation for `examples/basics/custom_embeddings.ipynb`
- Updated the documentation for `examples/basics/data_row_metadata.ipynb`
- Added details about CRUD methods to `examples/basics/ontologies.ipynb`
- Removed numpy version lock that caused Python version >3.8 to download incompatible numpy version
- Improved batch creation logic when more than 1000 global keys provided
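Handling more than 1,000 global keys means splitting the input and submitting one batch per chunk. A rough illustration of the splitting step only (not the SDK's actual batching code):

```python
def chunk(seq, size=1000):
    """Split a sequence into consecutive chunks of at most `size` items."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

# Hypothetical input: 2,500 global keys become three submissions.
global_keys = [f"gk-{i}" for i in range(2500)]
batches = chunk(global_keys)
print([len(b) for b in batches])  # → [1000, 1000, 500]
```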
- Added example on how to access mark in export v2
- Removed NDJSON library from `examples/basics/custom_embeddings.ipynb`
- Removed `queue_mode` property from `create_project()` method call.
- Support for ISO format to exports V2 date filters
- Support to specify confidence for all free-text annotations
- Removed backports library and replaced it with python dateutil package to parse iso strings
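The SDK itself relies on python-dateutil for this; the stdlib call below illustrates the same ISO 8601 parsing for well-formed strings (dateutil additionally tolerates looser formats):

```python
from datetime import datetime, timezone

# Stdlib illustration of ISO string parsing; the date value is made up.
ts = datetime.fromisoformat("2023-06-13T10:30:00+00:00")
print(ts.year, ts.tzinfo == timezone.utc)  # → 2023 True
```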
- Added predictions to model run example
- Added notebook to run yolov8 and sam on video and upload to LB
- Updated google colab notebooks to reflect raster segmentation tool being released on 6/13
- Updated radio NDJSON annotations format to support confidence
- Added confidence to all free-text annotations (ndjson)
- Fixed issues with the cv2 library stemming from the Geospatial notebook using a PNG map with a signed URL whose token had expired
- Loading of the ndjson parser when optional [data] libraries (geojson etc.) are not installed
- Support for interpolated frames to export v2
- Removed ndjson library and replaced it with a custom ndjson parser
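NDJSON is simply one JSON value per line, so a replacement parser can be tiny. A simplified sketch of what such a parser does (the SDK's real parser also handles error reporting and streaming):

```python
import json

def parse_ndjson(text):
    """Parse newline-delimited JSON: one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Hypothetical annotation payload with a blank line in the middle.
raw = '{"uuid": "a", "name": "box"}\n\n{"uuid": "b", "name": "point"}\n'
rows = parse_ndjson(raw)
print([r["uuid"] for r in rows])  # → ['a', 'b']
```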
- Removed confidence scores in annotations - video notebook
- Removed raster seg masks from video prediction
- Added export v2 example
- Added SAM and Labelbox connector notebook
- Global key support to DataRow Metadata `bulk_upsert()` function
- Removed dataset based projects from project setup notebook
- Updated all links to annotation import and prediction notebooks in examples README
- Reduce threshold for async batch creation to 1000 data rows
- Added subclassifications to ontology notebook
- Added conversational and pdf predictions notebooks
- `predictions` param for optionally exporting predictions in model run export v2
- Limits on `model_run_ids` and `project_ids` on catalog export v2 params
- `WORKFLOW_ACTION` webhook topic
- Added `data_row_ids` filter for dataset and project export v2
- ISO timestamp parsing for datetime metadata
- Docstring typo for `client.delete_feature_schema_from_ontology()`
- Removed mention of embeddings metadata fields
- Fixed broken colab link on `examples/extras/classification-confusion-matrix.ipynb`
- Added free text classification example to video annotation import notebook
- Updated prediction_upload notebooks with Annotation Type examples
- Nested object classifications to `VideoObjectAnnotation`
- Relationship Annotation Types
- Added `project_ids` and `model_run_ids` to params in all export_v2 functions
- VideoMaskAnnotation annotation import
- Added DICOM annotation import notebook
- Added audio annotation import notebook
- Added HTML annotation import notebook
- Added relationship examples to annotation import notebooks
- Added global video classification example
- Added nested classification examples
- Added video mask example
- Added global key and LPOs to queue management notebook
- Message based classifications with annotation types for conversations
- Video and raster segmentation annotation types
- Global key support to `ConversationEntity`, `DocumentEntity` and `DicomSegments`
- DICOM polyline annotation type
- Confidence attribute to classification annotations
- Increased metadata string size limit to 4096 chars
- Removed `deletedDataRowGlobalKey` from `get_data_row_ids_for_global_keys()`
- Annotation data type coercion by Pydantic
- Error message when end point coordinates are smaller than start point coordinates
- Some typos in error messages
- Refactored video notebook to include annotation types
- Replaced data row ids with global keys in notebooks
- Replaced `create_data_row` with `create_data_rows` in notebooks
- New data classes for creating labels: `AudioData`, `ConversationData`, `DicomData`, `DocumentData`, `HTMLData`
- New `DocumentEntity` annotation type class
- New parameter `last_activity_end` to `Project.export_labels()`
- Updated `annotation_import/pdf.ipynb` with example use of `DocumentEntity` class
- Fixed issue where calling `create_batch()` on exported data rows wasn't working
- Support Global keys to reference data rows in `Project.create_batch()`, `ModelRun.assign_data_rows_to_split()`
- Support upserting labels via project_id in `model_run.upsert_labels()`
- `media_type_override` param to export_v2
- `last_activity_at` and `label_created_at` params to export_v2
- New client method `is_feature_schema_archived()`
- New client method `unarchive_feature_schema_node()`
- New client method `delete_feature_schema_from_ontology()`
- Removed default task names for export_v2
- `process_label()` for COCO panoptic dataset
- Updated `annotation_import/pdf.ipynb` with more examples
- Added `integrations/huggingface/huggingface.ipynb`
- Fixed broken links for detectron notebooks in README
- Added Dataset QueueMode during project creation in `integrations/detectron2/coco_object.ipynb`
- Removed metadata and updated ontology in `annotation_import/text.ipynb`
- Removed confidence scores in `annotation_import/image.ipynb`
- Updated custom embedding tutorial links in `basics/data_row_metadata.ipynb`
- New method `Project.task_queues()` to obtain the task queues for a project.
- New method `Project.move_data_rows_to_task_queue()` for moving data rows to a specified task queue.
- Added more descriptive error messages for metadata operations
- Added `Task.errors_url` for async tasks that return errors as a separate file (e.g. `export_v2`)
- Upsert data rows to model runs using global keys
- Updated `ProjectExportParams.labels` to `ProjectExportParams.label_details`
- Removed `media_attributes` from `DataRowParams`
- Added deprecation warnings for `LabelList` and removed its usage
- Removed unused arguments in `Project.export_v2` and `ModelRun.export_v2`
- In `Project.label_generator()`, we now filter skipped labels for projects with videos
- Fixed `examples/label_export/images.ipynb` notebook metadata
- Removed unused `lb_serializer` imports
- Removed uuid generation in NDJson annotation payloads, as it is now optional
- Removed custom embeddings usage in `examples/basics/data_row_metadata.ipynb`
- New notebook `examples/basics/custom_embeddings.ipynb` for custom embeddings
- Updated `examples/annotation_import/text.ipynb` to use `TextData` and specify Text media type
- All imports are available via `import labelbox as lb` and `import labelbox.types as lb_types`.
- `attachment_name` support to `create_attachment()`
- `LabelImport.create_from_objects()`, `MALPredictionImport.create_from_objects()`, `MEAPredictionImport.create_from_objects()`, `Project.upload_annotations()`, `ModelRun.add_predictions()` now support Python Types for annotations.
- Removed NDJsonConverter from example notebooks
- Simplified imports in all notebooks
- Fixed nested classification in `examples/annotation_import/image.ipynb`
- Ontology (instructions --> name)
- New `last_activity_start` param to `project.export_labels()` for filtering which labels are exported. See docstring for more on how this works.
- Rename `Classification.instructions` to `Classification.name`
- Retry connection timeouts
- `confidence` is now optional for TextEntity
- `confidence` attribute is now supported for TextEntity and Line predictions
- Retry 520 errors when uploading files
- Added `get_by_name()` method to MetadataOntology object to access both custom and reserved metadata by name.
- Added support for adding metadata by name when creating data rows using `DataRowMetadataOntology.bulk_upsert()`.
- Added support for adding metadata by name when creating data rows using `Dataset.create_data_rows()`, `Dataset.create_data_rows_sync()`, and `Dataset.create_data_row()`.
- Example notebooks for auto metrics in models
- `Dataset.create_data_rows()` max limit of DataRows increased to 150,000
- Improved error handling for invalid annotation import content
- String metadata can now be 1024 characters long (from 500)
- Broken urls in detectron notebook
- Fixed a bug where the batch creation limit was still capping the number of data rows; the SDK now supports creating batches with up to 100k data rows
- Added SDK support for creating batches with up to 100k data rows
- Added optional media_type to `client.create_ontology_from_feature_schemas()` and `client.create_ontology()`
- String representation of `DbObject` subclasses is now formatted
- Added `HTML` Enum to `MediaType`. `HTML` is introduced as a new asset type in Labelbox.
- Added `PaginatedCollection.get_one()` and `PaginatedCollection.get_many()` to provide easy functions to fetch single and bulk instances of data for any function returning a `PaginatedCollection`. E.g. `data_rows = dataset.data_rows().get_many(10)`
- Added a validator under `ScalarMetric` to validate metric names against reserved metric names
- In `iou.miou_metric()` and `iou.feature_miou_metric`, iou metric renamed as `custom_iou`
- Added `client.clear_global_keys()` to remove global keys from their associated data rows
- Added a new attribute `confidence` to `AnnotationObject` and `ClassificationAnswer` for Model Error Analysis
- Fixed `project.create_batch()` to work with both data_row_ids and data_row objects
- Added step to `project.create_batch()` to wait for data rows to finish processing
- Running `project.setup_editor()` multiple times no longer resets the ontology, and instead raises an error if the editor is already set up for the project
- `create_data_rows`, `create_data_rows_sync`, `create_data_row`, and update data rows all accept the new data row input format for row data
- `create_data_row` now accepts an attachment parameter to be consistent with `create_data_rows`
- Conversational text data rows will be uploaded to a json file automatically on the backend to reduce the amount of i/o required in the SDK.
- Added new base `Slice` Entity/DbObject and `CatalogSlice` class
- Added `client.get_catalog_slice(id)` to fetch a CatalogSlice by ID
- Added `slice.get_data_row_ids()` to fetch data row ids of the slice
to fetch data row ids of the slice - Add deprecation warning for queue_mode == QueueMode.Dataset when creating a new project.
- Add deprecation warning for LPOs.
- Default behavior for metrics to not include subclasses in the calculation.
- Polygon extraction from masks creating invalid polygons. This would cause issues in the coco converter.
- Added warning for upcoming change in default project queue_mode setting
- Added notebook example for importing Conversational Text annotations using Model-Assisted Labeling
- Updated QueueMode enum to support new value for QueueMode.Batch = `BATCH`.
- `Task.failed_data_rows` is now a property
- Fixed Task.wait_till_done() showing warning message for every completed task, instead of only warning when task has errors
- Fixed error on dataset creation step in examples/annotation_import/video.ipynb notebook
- Added deprecation warning for missing `media_type` in `create_project` in `Client`.
- Updated docs for deprecated methods `_update_queue_mode` and `get_queue_mode` in `Project`
  - Use the `queue_mode` attribute in `Project` to get and set the queue mode instead
  - For more information, visit https://docs.labelbox.com/reference/migrating-to-workflows#upcoming-changes
- Updated `project.export_labels` to support filtering by start/end time formats "YYYY-MM-DD" and "YYYY-MM-DD hh:mm:ss"
- Removed `client.get_data_rows_for_global_keys` until further notice
- Global Keys for data rows
  - Assign global keys to a data row with `client.assign_global_keys_to_data_rows`
  - Get data rows using global keys with `client.get_data_row_ids_for_global_keys` and `client.get_data_rows_for_global_keys`
- Project Creation
  - Introduces `Project.queue_mode` as an optional parameter when creating projects
- Introduces `MEAToMALPredictionImport` class
  - This allows users to use predictions stored in Models for MAL
- `Task.wait_till_done` now shows a warning if task has failed
- Increase scalar metric value limit to 100m
- Added deprecation warnings when updating project `queue_mode`
- Fix bug in `feature_confusion_matrix` and `confusion_matrix` causing FPs and FNs to be capped at 1 when there were no matching annotations
- Support for document (pdf) de/serialization from exports
  - Use the `LBV1Converter.serialize()` and `LBV1Converter.deserialize()` methods
- Support for document (pdf) de/serialization for imports
  - Use the `NDJsonConverter.serialize()` and `NDJsonConverter.deserialize()` methods
- `ModelRun.get_config()` - Modifies get_config to return un-nested Model Run config
- `ModelRun.update_config()` - Updates model run training metadata
- `ModelRun.reset_config()` - Resets model run training metadata
- `ModelRun.get_config()` - Fetches model run training metadata
- `Model.create_model_run()` - Add training metadata config as a model run creation param
- `Batch.delete()` which will delete an existing `Batch`
- `Batch.delete_labels()` which will delete all `Label`s created after a `Project`'s mode has been set to batch.
  - Note: Does not include labels that were imported via model-assisted labeling or label imports
- Support for creating model config when creating a model run
- `RAW_TEXT` and `TEXT_FILE` attachment types to replace the `TEXT` type.
- Label export will continue polling if the downloadUrl is None
- Mask downloads now have retries
- Failed `upload_data` now shows more details in the error message
- Fixed Metadata not importing with DataRows when bulk importing local files.
- Fixed COCOConverter failing for empty annotations
- Notebooks are up-to-date with examples of importing annotations without `schema_id`
- Removed extra dependency causing import errors.
- Importing annotations with model-assisted labeling or label imports using ontology object names instead of schemaId is now possible
  - In Python dictionaries, you can now use the `schemaId` key or the `name` key for all tools, classifications, options
- Labelbox's Annotation Types now support model-assisted labeling or label imports using ontology object names
- Export metadata when using the following methods:
  - `Batch.export_data_rows(include_metadata=True)`
  - `Dataset.export_data_rows(include_metadata=True)`
  - `Project.export_queued_data_rows(include_metadata=True)`
- `VideoObjectAnnotation` has `segment_index` to group video annotations into video segments
- `Project.video_label_generator`. Use `Project.label_generator` instead.
- Model Runs now support unassigned splits
- `Dataset.create_data_rows` now has the following limits:
  - 150,000 rows per upload without metadata
  - 30,000 rows per upload with metadata
- Added `refresh_ontology()` as part of create/update/delete metadata schema functions
- `DataRowMetadataOntology` class now has functions to create/update/delete metadata schema
  - `create_schema` - Create custom metadata schema
  - `update_schema` - Update name of custom metadata schema
  - `update_enum_options` - Update name of an Enum option for an Enum custom metadata schema
  - `delete_schema` - Delete custom metadata schema
- `ModelRun` class now has `assign_data_rows_to_split` function, which can assign a `DataSplit` to a list of `DataRow`s
- `Dataset.create_data_rows()` can bulk import `conversationalData`
- Import for `numpy` has been adjusted to work with numpy v.1.23.0
- `DataRow` object now has a new field, `metadata`, which returns metadata associated with data row as a list of `DataRowMetadataField`
  - Note: When importing Data Rows with metadata, use the existing field, `metadata_fields`
- `Task` objects now have the following properties:
  - `errors` - fetch information about why the task failed
  - `result` - fetch the result of the task
  - These are currently only compatible with data row import tasks.
- Officially added support for python 3.9
- python 3.6 is no longer officially supported
- Renamed `custom_metadata` to `metadata_fields` in DataRow
- `Dataset.create_data_row()` and `Dataset.create_data_rows()` now upload metadata to data row
- Added `media_attributes` and `metadata` to `BaseData`
- Removed `iou` from classification metrics
- `Project.create_batch()` timeout increased to 180 seconds
- Projects can be created with a `media_type`
- Added `media_type` attribute to `Project`
- New `MediaType` enumeration
- Added back the mimetype to datarow bulk uploads for orgs that require delegated access
- Ontology Classification `scope` field is only set for top level classifications
- Batches in a project can be retrieved with `project.batches()`
- Added `Batch.remove_queued_data_rows()` to cancel remaining data rows in batch
- Added `Batch.export_data_rows()` which returns `DataRow`s for a batch
- NDJsonConverter now supports Video bounding box annotations.
  - Note: Currently does not support nested classifications.
  - Note: Converting an export into Labelbox annotation types, and back to export will result in only keyframe annotations. This is to support correct import format.
- `batch.project()` now works
- `create_data_rows` and `create_data_rows_sync` now upload the file with a mimetype
  - Orgs that only allow DA uploads were getting errors when using these functions
- Added Tool object type RASTER_SEGMENTATION for Video and DICOM ontologies
- Added beta support for exporting labels from model_runs
- LBV1Converter now supports data_split key
- Classification objects now include `scope` key
- Updated notebooks
- `Project.upsert_instructions` now works properly for new projects.
- Remove unused rasterio dependency
- Create batches from the SDK (Beta). Learn more about batches
- Support for precision and recall metrics on Entity annotations
- `client.create_project` type hint added for its return type
- Removed batch MVP code
- Ability to fetch a model run with `client.get_model_run()`
- Ability to fetch labels from a model run with `model_run.export_labels()`
  - Note: this is only Experimental. To use, client param `enable_experimental` should be set to true
- Ability to delete an attachment
- Logger level is no longer set to INFO
- Deprecation: Creating Dropdowns will no longer be supported after 2022-03-31
  - This includes creating/adding Dropdowns to an ontology
  - This includes creating/adding Dropdown Annotation Type
  - For the same functionality, use Radio
  - This will not affect existing Dropdowns
- Extras folder which contains useful applications using the SDK
- Addition of ResourceTag at the Organization and Project level
- Updates to the example notebooks
- EPSGTransformer now properly transforms Polygon to Polygon
- VideoData string representation now properly shows VideoData
- Updated metrics for classifications to be per-answer
- Added `from_shapely` method to create annotation types from Shapely objects
- Added `start` and `end` filter on the following methods
  - `Project.export_labels()`
  - `Project.label_generator()`
  - `Project.video_label_generator()`
- Improved type hinting
- Tiled Imagery annotation type
  - A set of classes that support Tiled Image assets
  - New demo notebook can be found here: examples/annotation_types/tiled_imagery_basics.ipynb
  - Updated tiled image mal can be found here: examples/model_assisted_labeling/tiled_imagery_mal.ipynb
- Support transformations from one EPSG to another with `EPSGTransformer` class
  - Supports EPSG to Pixel space transformations
- Make `TypedArray` class compatible with `numpy` versions >= 1.22.0
- `project.upsert_review_queue` quotas can now be in the inclusive range [0,1]
- Restore support for upserting html instructions to a project
- `Dataset.create_data_rows()` now accepts an iterable of data row information instead of a list
- `project.upsert_instructions()`
  - now only supports pdfs since that is what the editor requires
  - There was a bug that could cause this to modify the project ontology
- `DataRowMetadataSchema.id`: use `DataRowMetadataSchema.uid` instead
- `ModelRun.delete_annotation_groups()`: use `ModelRun.delete_model_run_data_rows()` instead
- `ModelRun.annotation_groups()`: use `ModelRun.model_run_data_rows()` instead
- `AnnotationImport.wait_until_done()` accepts a `show_progress` param. This is set to `False` by default.
  - If enabled, a tqdm progress bar will indicate the import progress.
  - This works for all classes that inherit from AnnotationImport: `LabelImport`, `MALPredictionImport`, `MEAPredictionImport`
  - This is not supported for `BulkImportRequest` (which will eventually be replaced by `MALPredictionImport`)
- `Option.label` and `Option.value` can now be set independently
- `ClassificationAnswer`s now support a new `keyframe` field for videos
- New `LBV1Label.media_type` field. This is a placeholder for future backend changes.
- Nested checklists can have extra brackets. This would cause the annotation type converter to break.
- New ontology management features
  - Query for ontologies by name with `client.get_ontologies()` or by id using `client.get_ontology()`
  - Query for feature schemas by name with `client.get_feature_schemas()` or id using `client.get_feature_schema()`
  - Create feature schemas with `client.create_feature_schemas()`
  - Create ontologies from normalized ontology data with `client.create_ontology()`
  - Create ontologies from feature schemas with `client.create_ontology_from_feature_schemas()`
  - Set up a project from an existing ontology with `project.setup_editor()`
  - Added new `FeatureSchema` entity
- Add support for new queue modes
  - Send batches of data directly to a project queue with `project.queue()`
  - Remove items from a project queue with `project.dequeue()`
  - Query for and toggle the queue mode
- `ModelRun.upsert_data_rows()` - Add data rows to a model run without also attaching labels
- `OperationNotAllowedException` - raised when users hit resource limits or are not allowed to use a particular operation
- `ModelRun.upsert_labels()` - Blocks until the upsert job is complete. Error messages have been improved
- `Organization.invite_user()` and `Organization.invite_limit()` are no longer experimental
- `AnnotationGroup` was renamed to `ModelRunDataRow`
- `ModelRun.delete_annotation_groups()` was renamed to `ModelRun.delete_model_run_data_rows()`
- `ModelRun.annotation_groups()` was renamed to `ModelRun.model_run_data_rows()`
- `DataRowMetadataField` no longer relies on pydantic for field validation and coercion
  - This prevents unintended type coercions from occurring
- Search for data row ids from external ids without specifying a dataset
  - `client.get_data_row_ids_for_external_ids()`
- Support for numeric metadata type
- The following `DataRowMetadataOntology` fields were renamed:
  - `all_fields` -> `fields`
  - `all_fields_id_index` -> `fields_by_id`
  - `reserved_id_index` -> `reserved_by_id`
  - `reserved_name_index` -> `reserved_by_name`
  - `custom_id_index` -> `custom_by_id`
  - `custom_name_index` -> `custom_by_name`
- Fix import error that appears when exporting labels
- Bulk export metadata with `DataRowMetadataOntology.bulk_export()`
- Add docstring examples of annotation types and a few helper methods
- Update metadata notebook under examples/basics to include bulk_export.
- Allow color to be a single integer when constructing Mask objects
- Allow users to pass int arrays to RasterData and attempt coercion to uint8
- `data_row.metadata` was removed in favor of bulk exports.
- Diagnostics custom metrics
- Metric annotation types
- Update ndjson converter to be compatible with metric types
- Metric library for computing confusion matrix metrics and iou
- Demo notebooks under `examples/diagnostics`
- COCO Converter
- Detectron2 example integration notebooks
- IAM validation exception message
- New `IAMIntegration` entity
- `Client.create_dataset()` compatibility with delegated access
- `Organization.get_iam_integrations()` to list all integrations available to an org
- `Organization.get_default_iam_integration()` to only get the default iam integration
- `Dataset.create_data_rows_sync()` for synchronous bulk uploads of data rows
- `Model.delete()`, `ModelRun.delete()`, and `ModelRun.delete_annotation_groups()` to clean up models, model runs, and annotation groups.
- Increased timeout for label exports since projects with many segmentation masks weren't finishing quickly enough.
- Resolved issue where `create_data_rows()` was not working on amazon linux
- List `BulkImportRequest`s for a project with `Project.bulk_import_requests()`
- Improvements to `Dataset.create_data_rows()`
  - Add attachments when bulk importing data rows
  - Provide external ids when creating data rows from local files
  - Get more informative error messages when the api rejects an import
- Bug causing `project.label_generator()` to fail when projects had benchmarks
- Support for new HTML attachment type
- Delete Bulk Import Requests with `BulkImportRequest.delete()`
- Updated MEAPredictionImport class to use latest graphql endpoints
- Issue with inferring text type from export
- Annotation types
  - A set of python objects for working with labelbox data
  - Creates a standard interface for both exports and imports
  - See example notebooks on how to use under examples/annotation_types
  - Note that these types are not yet supported for tiled imagery
- MEA Support
  - Beta MEA users can now just use the latest SDK release
- Metadata support
  - New metadata features are now fully supported by the SDK
- Easier export
  - `project.export_labels()` accepts a boolean indicating whether or not to download the result
  - Create annotation objects directly from exports with `project.label_generator()` or `project.video_label_generator()`
  - `project.video_label_generator()` asynchronously fetches video annotations
- Retry logic on data uploads
  - Bulk creation of data rows will be more reliable
- Datasets
  - Determine the number of data rows just by calling `dataset.row_count`.
  - Updated threading logic in create_data_rows() to make it compatible with aws lambdas
- Ontology
  - `OntologyBuilder`, `Classification`, `Option`, and `Tool` can now be imported from `labelbox` instead of `labelbox.schema.ontology`
- Deprecated:
  - `project.reviews()`
  - `project.create_prediction()`
  - `project.create_prediction_model()`
  - `project.create_label()`
  - `Project.predictions()`
  - `Project.active_prediction_model`
  - `data_row.predictions`
  - `PredictionModel`
  - `Prediction`
- Replaced:
  - `data_row.metadata()`: use `data_row.attachments()` instead
  - `data_row.create_metadata()`: use `data_row.create_attachments()` instead
  - `AssetMetadata`: use `AssetAttachment` instead
- Support derived classes of ontology objects when using `from_dict`
- Notebooks:
- Video export bug where the code would fail if the exported projects had tools other than bounding boxes
- MAL demos were broken due to an image download failing.
- Data processing dependencies are not installed by default, for users that only want client functionality.
- To install all dependencies required for the data modules (annotation types and mea metric calculation) use `pip install labelbox[data]`
- Decrease wait time between updates for `BulkImportRequest.wait_until_done()`.
- Organization is no longer used to create the LFO in `Project.setup()`
- `Geometry.raster` now has a consistent interface and improved functionality
- renamed schema_id to feature_schema_id in the `FeatureSchema` class
- `Mask` objects now use `MaskData` to represent segmentation masks instead of `ImageData`
- Rename `data` property of TextData, ImageData, and VideoData types to `value`.
- Model diagnostics notebooks
- Minor annotation type improvements
- No longer convert `ModelRun.created_by_id` to cuid on construction of a `ModelRun`.
  - This was causing queries for ModelRuns to fail.
- Update `AnnotationGroup` to expect labelId to be a cuid instead of uuid.
- Update `datarow_miou` to support masks with multiple classes in them.
- Added `dataset.export_data_rows()` which returns all `DataRows` for a `Dataset`.
- `ModelRun.annotation_groups()` to fetch data rows and label information for a model run
- Updated `create_mask_ndjson` helper function in `image_mal.ipynb` to use the color arg instead of a hardcoded color.
- asset_metadata is now deprecated and has been replaced with asset_attachments
- `AssetAttachment` replaces `AssetMetadata` (see definition for updated attribute names)
  - Use `DataRow.attachments()` instead of `DataRow.metadata()`
  - Use `DataRow.create_attachment()` instead of `DataRow.create_metadata()`
- Update pydantic version
- Added new `Model` and `ModelRun` entities
- Update client to support creating and querying for `Model`s
- Implement new prediction import pipeline to support both MAL and MEA
- Added notebook to demonstrate how to use MEA
- Added `datarow_miou` for calculating datarow level iou scores
- MAL validation no longer raises exception when NER tool has same start and end location
- `DataRow` now has a `media_attributes` field
- `DataRow`s can now be looked up from `LabelingParameterOverride`s
- `Project.export_queued_data_rows` to export all data rows in a queue for a project at once
- User management
  - Query for remaining invites and users available to an organization
  - Set and update organization roles
  - Set / update / revoke project role
  - Delete users from organization
  - Example notebook added under examples/basics
- Issues and comments export
  - Bulk export issues and comments. See `Project.export_issues`
- MAL on Tiled Imagery
  - Example notebook added under examples/model_assisted_labeling
  - `Dataset.create_data_rows` now allows users to upload tms imagery
- Cleanup and add additional example notebooks
- Improved README for SDK and examples
- Easier to retrieve per annotation `BulkImportRequest` status, errors, and inputs
  - See `BulkImportRequest.errors`, `BulkImportRequest.statuses`, `BulkImportRequest.inputs` for more information
- Ontology builder defaults to None for missing fields instead of empty lists
- MAL validation added extra fields to subclasses
- Example notebooks
- `Dataset.data_row_for_external_id` no longer throws `ResourceNotFoundError` when there are duplicates
- Improved doc strings
- OntologyBuilder for making project setup easier
- Now supports `IMAGE_OVERLAY` metadata
- Webhooks for review topics added
- Upload project instructions with `Project.upsert_instructions`
- User input validation
  - MAL validity is now checked client side for faster feedback
  - type and value checks added in a few places
- Increase query timeout
- Retry 502s
- SDK version added to request headers
- 2.4.8 was broken for > Python 3.6
- include new `Project.enable_model_assisted_labeling` method for turning on model-assisted labeling
- fix failing `next` call Labelbox#74
- `Ontology` schema for interacting with ontologies and their schema nodes
- fix failing `create_metadata` calls
- retry capabilities for common flaky API failures
- protection against improper types passed into `Project.upload_annotations`
- pass thru API error messages when possible
- `BulkImportRequest` data type
- `Project.upload_annotation` supports uploading via a local ndjson file, url, or an iterable of annotations
- `Client.upload_data` will now pass the correct `content-length` when uploading data.
- `Dataset.create_data_row` and `Dataset.create_data_rows` will now upload with content type to ensure the Labelbox editor can show videos.
- `Prediction` and `PredictionModel` data types.
- `Client.execute` now automatically extracts the 'data' value from the returned `dict`. This breaks existing code that directly uses the `Client.execute` method.
- Major code reorganization, naming and test improvements.
- `Label.seconds_to_label` field value is now optional when creating a `Label`. Default value is 0.0.
- `Project.upsert_review_queue` method.
- `Project.extend_reservations` method.
- `Label.created_by` relationship (To-One User).
- Changelog.
- `Dataset.create_data_row` upload of local file data.
Changelog not maintained before version 2.2.