
Prepare 2.0.0-beta.12 release (Part 1) #216

Merged: 3 commits merged into main on Sep 20, 2024

Conversation

joseharriaga
Collaborator

@joseharriaga joseharriaga commented Sep 18, 2024

Features Added

  • The library now includes support for the new OpenAI o1 model family.
    • ChatCompletionOptions now automatically applies its MaxOutputTokenCount value (renamed from MaxTokens) to the new max_completion_tokens request body property.
    • Usage includes a new OutputTokenDetails property with a ReasoningTokenCount value that reflects o1 model use of this new subcategory of output tokens.
      • Note that OutputTokenCount (completion_tokens) is the sum of displayed tokens generated by the model and (when applicable) these new reasoning tokens.
  • Assistants file search now includes support for RankingOptions
    • Use of the include[] query string parameter and retrieval of run step detail result content are currently only available via protocol methods.
  • Added support for the Uploads API in FileClient. This Experimental feature allows uploading large files in multiple parts.
    • The feature is supported by the CreateUpload, AddUploadPart, CompleteUpload, and CancelUpload protocol methods.
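The o1-related behavior above can be sketched as follows. This is a hypothetical usage sketch, not code from this PR: the model name, environment variable, and prompt are placeholders, and the ChatClient convenience constructor taking a model and API key string is assumed.

```csharp
using System;
using OpenAI.Chat;

// Sketch only: "o1-preview" and the env var name are placeholder assumptions.
ChatClient client = new(
    model: "o1-preview",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

ChatCompletionOptions options = new()
{
    // Serialized as max_completion_tokens in the request body as of this release.
    MaxOutputTokenCount = 2048,
};

ChatCompletion completion = client.CompleteChat(
    [new UserChatMessage("Summarize quicksort in two sentences.")],
    options);

// OutputTokenCount is displayed tokens plus reasoning tokens (when applicable);
// the reasoning subcategory is broken out in OutputTokenDetails.
Console.WriteLine($"Output tokens: {completion.Usage.OutputTokenCount}");
Console.WriteLine($"Reasoning tokens: {completion.Usage.OutputTokenDetails?.ReasoningTokenCount}");
```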

Breaking Changes

  • Renamed ChatMessageContentPart's CreateTextMessageContentPart factory method to CreateTextPart.
  • Renamed ChatMessageContentPart's CreateImageMessageContentPart factory method to CreateImagePart.
  • Renamed ChatMessageContentPart's CreateRefusalMessageContentPart factory method to CreateRefusalPart.
  • Renamed ImageChatMessageContentPartDetail to ChatImageDetailLevel.
  • Removed ChatMessageContentPart's ToString overload.
  • Renamed the MaxTokens property in ChatCompletionOptions to MaxOutputTokenCount.
  • Renamed properties in ChatTokenUsage:
    • InputTokens is renamed to InputTokenCount
    • OutputTokens is renamed to OutputTokenCount
    • TotalTokens is renamed to TotalTokenCount
  • Removed the common ListOrder enum from the top-level OpenAI namespace in favor of individual enums in their corresponding sub-namespace.
  • Renamed the PageSize property to PageSizeLimit.
  • Updated deletion methods to return a result object instead of a bool. Affected methods:
    • DeleteAssistant, DeleteMessage, and DeleteThread in AssistantClient.
    • DeleteVectorStore and RemoveFileFromStore in VectorStoreClient.
    • DeleteModel in ModelClient.
    • DeleteFile in FileClient.
  • Removed setters from collection properties.
  • Renamed ChatTokenLogProbabilityInfo to ChatTokenLogProbabilityDetails.
  • Renamed ChatTokenTopLogProbabilityInfo to ChatTokenTopLogProbabilityDetails.
  • Renamed the Utf8ByteValues properties of ChatTokenLogProbabilityDetails and ChatTokenTopLogProbabilityDetails to Utf8Bytes and changed their type from IReadOnlyList<int> to ReadOnlyMemory<byte>?.
  • Renamed the Start and End properties of TranscribedSegment and TranscribedWord to StartTime and EndTime.
  • Changed the type of TranscribedSegment's AverageLogProbability and NoSpeechProbability properties from double to float.
  • Changed the type of TranscribedSegment's SeekOffset property from long to int.
  • Changed the type of TranscribedSegment's TokenIds property from IReadOnlyList<long> to IReadOnlyList<int>.
  • Replaced the Embedding.Vector property with the Embedding.ToFloats() method.
  • Removed the optional parameter from the constructors of VectorStoreCreationHelper, AssistantChatMessage, and ChatFunction.
  • Removed the optional purpose parameter from FileClient.GetFilesAsync and FileClient.GetFiles methods, and added overloads where purpose is required.
  • Renamed ModerationClient's ClassifyTextInput methods to ClassifyText.
  • Removed duplicated Created property from GeneratedImageCollection.
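A few of the renames above, shown side by side. This is a migration sketch only: the completion, embedding, fileClient, and fileId values are assumed to come from earlier calls, and FileDeletionResult is my assumption for the name of the new deletion result type.

```csharp
// Token usage property renames (old names shown in comments):
ChatTokenUsage usage = completion.Usage;
int inputTokens  = usage.InputTokenCount;   // was: InputTokens
int outputTokens = usage.OutputTokenCount;  // was: OutputTokens
int totalTokens  = usage.TotalTokenCount;   // was: TotalTokens

// Embedding.Vector (property) is now Embedding.ToFloats() (method):
ReadOnlyMemory<float> vector = embedding.ToFloats();

// Deletion methods return a result object instead of a bool.
// was: bool deleted = fileClient.DeleteFile(fileId);
FileDeletionResult deletion = fileClient.DeleteFile(fileId);
```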

Bugs Fixed

  • Addressed an issue that caused multi-page queries of fine-tuning jobs, checkpoints, and events to fail.
  • ChatCompletionOptions can now be serialized via ModelReaderWriter.Write() before the options have been used in a CompleteChat call.
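The serialization fix can be sketched like this, assuming the System.ClientModel.Primitives.ModelReaderWriter API that the library's model types implement:

```csharp
using System;
using System.ClientModel.Primitives;
using OpenAI.Chat;

ChatCompletionOptions options = new()
{
    MaxOutputTokenCount = 256,
    Temperature = 0.2f,
};

// Previously this failed if the options had not yet been passed to a
// CompleteChat call; as of this release it serializes directly.
BinaryData json = ModelReaderWriter.Write(options);
Console.WriteLine(json.ToString());
```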

Other Changes

  • Added support for CancellationToken to ModelClient methods.
  • Applied the Obsolete attribute where appropriate to align with the existing deprecations in the REST API.
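A minimal sketch of the new cancellation support. The ModelClient convenience constructor taking an API key string and a cancellationToken parameter on GetModelsAsync are assumptions based on the note above.

```csharp
using System;
using System.Threading;
using OpenAI.Models;

// Sketch: flowing a CancellationToken through a ModelClient call.
ModelClient modelClient = new(Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// Cancel the request if it has not completed within 10 seconds.
using CancellationTokenSource cts = new(TimeSpan.FromSeconds(10));
var models = await modelClient.GetModelsAsync(cancellationToken: cts.Token);
```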

@joseharriaga joseharriaga merged commit 2ab1a94 into main Sep 20, 2024
1 check passed
@joseharriaga joseharriaga deleted the joseharriaga/Pre-2.0.0-beta.12 branch September 20, 2024 19:02
@bartczernicki

@joseharriaga This part doesn't work in the latest beta nor the latest 2.1 release:

  • ChatCompletionOptions will automatically apply its MaxOutputTokenCount value (renamed from MaxTokens) to the new max_completion_tokens request body property

Using the o1-preview model with the latest API version, setting the MaxOutputTokenCount value results in an error.

@joseharriaga
Collaborator Author

@bartczernicki Thank you for reaching out! By any chance, are you using the Azure OpenAI service and the Azure OpenAI companion library for .NET? I'm not able to reproduce the issue, and I'm wondering if the problem is related to Azure OpenAI or the companion library instead.

If you're using the Azure OpenAI companion library, could you file the issue in the Azure SDK repo here?
🔗 https://github.com/Azure/azure-sdk-for-net

If you can include some repro steps and/or code snippets, that would be super helpful. 🙂 Thanks!

@bartczernicki

@joseharriaga You are correct. I am using the Azure SDK library. Both are on v2.1 and I used the wrong one. Thanks for the clarification and guidance!
