- Fix `get integrations`
- Update reinfer.io URLs to reinfer.dev
- Fix validation when providing property filter as JSON
- Add stop after on `get comments`
- Round trip `field_id`
- Add `ixp` dataset flag
- Rename label def description to instructions
- Default value for `PropertyValue`
- Fix selection index issue on custom label trend reports
- Fix attachments not getting uploaded when syncing comments
- Add custom label trend report
- Add validation to dataset `--stats`
- Fix issue when adding configs from URL
- Add dataset flags to `create-dataset`
- Add `parse aic-classification-csv`
- Round trip `_entity_def_flags`
- Add `get keyed sync states`
- Add `delete keyed sync states`
- Strip invalid Windows characters when saving attachments
- Don't re-download attachments that already exist locally
- Add `only-with-attachment` filter on `get comments`
- Retry when putting comments
- Add ability to get email by id
- Add ability to upload attachment content for comments
- Fix bug where comments would not be printed when downloading attachments
- Add ability to randomly sample with `get comments`
- Add `config parse-from-url` command for parsing configuration from a URL
- Add ability to download attachments for comments
- Increase default http timeout to 120s
- Add `--resume-on-error` flag when creating annotations
- Remove `--use-moon-forms` flag
- Add `--resume-on-error` flag when creating comments / emails
- Add general fields to `create datasets`
- Allow users to filter `get datasets` by sources that they reference
- Bucket statistics now provide either an exact count of raw emails up to a predefined upper limit, or a lower bound if the count exceeds this limit.
- The `create bucket` flag `--transform-tag` is now removed.
- Fixes issue when getting streams that have multiple filters on a single user property
- Fixes issue where upper case file names would not be matched in `parse`
- Reduce batch size when deleting comment batches
- Support attachment type filters
- Support getting stats for `get buckets`
- Show usage on `get quotas`
- BREAKING: the `--context` option is now required. Users need to opt out if they don't want to provide this for every command
- BREAKING: the `--context` option is always a required field for internal users
- Add `get emails`
- Added support for `--auto-increase-up-to` when creating quotas
- Support spans format for entities
- Fix a bug where some label annotations cannot be applied
- Minor API improvements
- Add integration commands
- Fix a bug where stream responses were not correctly parsed
- Fix a bug where streams were not correctly advanced
- Add messages filters
- Fixes `required` field error when interacting with datasets
- Reduce batch size for `parse emls`
- Add get audit events command
- Add ability to parse .emls
- Add more stream stats
- Fix URL used for fetching streams
- Return `is_end_sequence` on stream fetch
- Make `transform_tag` optional on `create bucket`
- Retry `put_emails` requests
- Add `get stream-stats` to expose and compare model validation
- Add ability to get dataset stats
- Show Global Permissions in `get users`
- Upgrade `ordered-float` version, which is exposed in the public crate API.
- Add ability to filter users by project and permission
- Add feature to parse Unicode .msg files
- Add create streams command
- Show source statistics in table when getting sources
- Add ability to filter on user properties when getting comments
- Add comment id to document object in API
- Add label filter when downloading comments with predictions
- Retry requests on request error
- Retry `TOO_MANY_REQUESTS`
- Support markup in signatures
- Fix bug where annotations may have been uploaded before comments, causing a failure
- Always retry on connection issues
- Upload annotations in parallel
- Add attachments to `sync-raw-email`
- Add command to list quotas for current tenant
- Show correct statistics when downloading comments
- Add `sync-raw-emails` to API
- Add support for markup on comments
- Add a warning for UiPath cloud users when an operation will charge AI units
- Add user property filters to the query api
- Add recent as an option for the query api
- Skip serialization of continuation on `None`
- Add `no-charge` flag to create comment/email commands
- Add comment and label filters to `get_statistics`
- Add timeseries to `get_statistics`
- Add `query_dataset` to API
- `re get comments` returns label properties
- `re create quota` to set a quota in a tenant
- Rename "triggers" to "streams" following the rename in the API
- Removed semantic url joins to support deployments within a subdirectory
- Added functionality to use moon forms both in `LabelDef`s and in `AnnotatedComment`s
- `re get comments` will now return auto-thresholds for predicted labels if provided with a `--model-version` parameter
- `re update users` for bulk user permission updates
- Option to send welcome email on create user
- `re update source` can now update the source's transform tag
- `re get source` and `re get sources` will show the bucket name if it exists
- `re get comments` can now download predictions within a given time range
- Display project ids when listing projects
- Add support for getting or deleting a single user
- Upgrade all dependencies to their latest released version
- Enable retry logic for uploading annotations
- Add support for optionally setting a transform tag on a source
- Renames organisation -> project throughout, including in the CLI command line arguments for consistency with the new API
- `re create dataset` will default to sentiment disabled if `--has-sentiment` is not provided.
- Changed `--source-type` parameter to `--kind`.
- `re create trigger-exception` to tag a comment exception within a trigger.
- Fix serialization of sources after API change of internal parameter `_kind`.
- Fixes serialization issue where statistics expected `usize` not `f64`
- Add an optional `--source-type` parameter to `create source`. Only for internal use.
- New `re create annotations` command for uploading annotations (labels and entities) to existing comments in a dataset, without having to use `re create comments`. This avoids potentially, and unknowingly, modifying the underlying comments in the source (see the sketch below).
- Add support to `--force` delete projects with existing resources.
- Print comment `uid` when a comment upload fails due to bad annotations.
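A minimal sketch of `re create annotations` under assumed flag syntax (the dataset and source names, the `--dataset`/`--source` flags, and the JSONL file argument are illustrative, not taken from this changelog):

```bash
# Upload labels/entities for existing comments without modifying the comments themselves.
# All names below are placeholders; check `re create annotations --help` for the real flags.
re create annotations \
  --dataset my-org/my-dataset \
  --source my-org/my-source \
  annotations.jsonl
```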
- Fixed failure when uploading comments with thread properties
- Added support for the new labellings API. Old JSONL files can still be uploaded with `re`, but newly downloaded JSONL files will be in the new format.
- Deserialize thread properties when downloading comments for a dataset (the `-d dataset` option for `re get comments`). This is limited to dataset downloads because only the /labellings API route returns thread properties (see the sketch below).
- Added `re config get-token [context]`, which dumps the auth token for the current context or a different, given context.
- Added CRUD commands for projects.
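For example, a dataset-level download that includes thread properties might look like this sketch (the positional source argument and the output redirection are assumptions):

```bash
# Download comments through the dataset so thread properties are populated.
# "my-org/my-source" and "my-org/my-dataset" are placeholders.
re get comments my-org/my-source -d my-org/my-dataset > comments.jsonl
```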
- Added option for `--label-groups` in `re create dataset`.
- All API resources with floats now use `ordered_float::NotNan`
- A new top-level flag `-o/--output` has been added. This replaces all previous `-o/--output` flags in the `re get *` subcommands.
- The `EntityDefs` wrapper has been removed in favour of `Vec<EntityDef>`. This impacts the `NewDataset` and `Dataset` structs
- `EntityDef` has added fields to accurately reflect the API return type
- Added `metadata` field to the `Label` struct
- More public types implement `Serialize`, `Eq` and `Hash` for downstream use.
- `get comment`: get a single comment by source and id
- Created or updated resources will be returned via stdout. The format of the output can be changed with the global `-o/--output` flag (see the sketch below).
  - This excludes creation of the `comments` and `emails` resources.
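A sketch of the top-level flag (the `json` value and the subcommand are illustrative assumptions):

```bash
# Choose the output format once, at the top level, instead of per `re get *` subcommand.
re --output json get datasets
```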
- Added `entity_defs` and `label_defs` to the `reinfer_api::Dataset` struct, and `create dataset` command
- Added `LabelDef`, `NewLabelDef`, `NewEntity` and associated structs
- `NewDataset`'s `entity_defs` field is now an `Option` for consistency
- When uploading annotated comments, empty lists of assigned / dismissed labels are serialized in the request. Previously, empty lists were skipped, which meant it was not possible to remove labellings (N.B. the API distinguishes between a missing field -- labellings are unmodified -- and an empty list -- labellings are removed); see the illustration below.
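The distinction can be illustrated with two hypothetical JSONL lines (field names here are assumptions, not the exact request schema):

```bash
# Write two illustrative annotation lines to a file; field names are placeholders.
cat > annotations.jsonl <<'EOF'
{"comment": {"id": "example-1"}}
{"comment": {"id": "example-2"}, "labels": {"assigned": [], "dismissed": []}}
EOF
# Line 1 omits the labels field, so existing labellings stay unmodified;
# line 2 sends explicit empty lists, which removes the labellings.
```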
- All `*Id` types now implement `Hash`, `PartialEq`, and `Eq`
- `NewDataset` and `NewSource` now implement `Default`
- `update source`: update an existing source
- `update dataset`: update an existing dataset
- The `create bucket` flag `--transform-tag` is now required.
- `delete bulk`: slight performance optimisations.
- `create dataset`: Accept an optional `--model-family` and `--copy-annotations-from` for the new dataset.
- `delete bulk`: For deleting multiple comments by source id. When run with `--include-annotated=false`, skips annotated comments. When run with `--include-annotated=true`, deletes all comments (see the sketch below).
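A usage sketch (the positional source argument is an assumption; only `delete bulk` and `--include-annotated` come from this changelog):

```bash
# Delete only unannotated comments from a source (placeholders throughout).
re delete bulk my-org/my-source --include-annotated=false

# Delete every comment in the source, including annotated ones.
re delete bulk my-org/my-source --include-annotated=true
```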
- Add support for using an HTTP proxy for all requests. The proxy configuration is saved as part of the context. Additionally, the proxy can be overridden or specified as a one-off using the global command line argument `--proxy https://proxy.example` (e.g. similar to `--endpoint`); see the sketch below.
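For instance, a one-off override might look like the following sketch (the proxy URL and the subcommand are placeholders):

```bash
# Route a single invocation through an explicit HTTP proxy instead of the
# proxy saved in the current context.
re --proxy https://proxy.example get sources
```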
- Updated error types and handling throughout. This changes the publicly visible `reinfer_client::errors` module.
- The `-e` flag used to pass in entity kinds at dataset creation has been re-purposed. One now needs to pass in a JSON object containing the corresponding `EntityDef` to be added to the new dataset. Example:
  `re create dataset org/example-dataset -s org/example-source --has-sentiment false -e '[{"name":"trainable_org","title":"Custom Organisation","inherits_from":["org"],"trainable":true}','{"name":"non_trainable_person","title":"Basic Person","inherits_from":["person"],"trainable":false}]'`
- `delete comments`: For deleting multiple comments by comment id from a source.
- `create bucket`: Accept an optional `--transform-tag` value for the new bucket.
- `get buckets`: Display transform tag for retrieved data.
- `create source`: Improve error message when specifying an invalid source name.
- Commands which make multiple API requests will now retry on timeout for requests after the first.
- Bump all dependencies to latest versions.
- `create comments`: Add check for duplicate comment IDs before attempting upload of a comment file. Use `--allow-duplicates` to skip this check (see the sketch below).
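A sketch of skipping the duplicate check (the positional source argument and the stdin input are assumptions):

```bash
# Upload a comment file without the client-side duplicate ID check (placeholders throughout).
re create comments my-org/my-source --allow-duplicates < comments.jsonl
```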
- `create comments`, `get comments`, `create emails`: Replace `--progress` flag with `--no-progress`.
- `create comments`: Stop overwriting existing comments by default. Use `--overwrite` to use the previous behaviour.
This release is identical to 0.3.1, but was republished due to a packaging bug.
- Fixes downloading predictions for comments in sentimentless datasets (#6).