Release notes for the Generative Models extension, version 2.1

Released: July 8, 2024

New operators

  • Send Conversation (OpenAI): sends a complete conversation to the model so that the previous exchange serves as context. The operator expects a data set as input with at least two columns: one for the role and one for the content of each message. The complete conversation is sent to OpenAI and the answer is appended to the end of the data set.
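The role/content layout maps directly onto the messages format of the OpenAI chat completions API. A minimal sketch of that mapping (the column names "role" and "content" are assumptions for illustration, not the operator's required names):

```python
# Each data set row becomes one chat message; "role" and "content"
# are hypothetical names for the two expected columns.
rows = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this conversation."},
    {"role": "assistant", "content": "There is nothing to summarize yet."},
    {"role": "user", "content": "Then just say hello."},
]

def to_messages(rows):
    """Convert the tabular conversation into the list-of-dicts
    format that chat completion endpoints expect."""
    return [{"role": r["role"], "content": r["content"]} for r in rows]

messages = to_messages(rows)
# The model's answer would then be appended as one more row with
# role "assistant", mirroring what the operator does with the data set.
```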

  • Conversational (History): like Send Conversation (OpenAI), this operator can be used when a complete conversation should serve as context for a conversational text generation model. It can use any Hugging Face model, similar to the regular Conversational operator.

Improvements

  • Operators for OpenAI (including the new operator Send Conversation (OpenAI)) now also support using OpenAI models through the Microsoft Azure OpenAI service. Users need to set the type parameter to “Azure” in the supporting operators and provide a dictionary connection as input containing the api_key and api_base_url of the deployed models. Deploying finetuned models and deleting models and deployments are not supported and need to be performed in the Azure OpenAI web interface. Please refer to the Azure OpenAI documentation for additional information.
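A rough sketch of what such a dictionary connection might contain, assuming the api_key and api_base_url keys described above (the helper function itself is hypothetical, not part of the extension):

```python
def build_connection(provider_type, api_key, api_base_url=None):
    """Assemble a dictionary connection; for the "Azure" type the
    base URL of the deployed models is required."""
    conn = {"type": provider_type, "api_key": api_key}
    if provider_type == "Azure":
        if not api_base_url:
            raise ValueError("Azure connections require an api_base_url")
        conn["api_base_url"] = api_base_url
    return conn

conn = build_connection(
    "Azure",
    api_key="<your-azure-openai-key>",
    api_base_url="https://<your-resource>.openai.azure.com",
)
```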

  • Send Prompt (OpenAI) and Embeddings (OpenAI) now support parallel requests (default: 10), which can greatly accelerate querying for larger data sets. Depending on their OpenAI usage limits, users may need to reduce this number or may be able to increase it further.
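The effect of the parallel-request setting can be illustrated with a simple thread pool; send_prompt below is a stand-in for the actual API call, not the extension's code:

```python
from concurrent.futures import ThreadPoolExecutor

def send_prompt(prompt):
    # Placeholder for a network round-trip to the model.
    return prompt.upper()

prompts = [f"prompt {i}" for i in range(25)]

# 10 workers mirror the operators' default of 10 parallel requests;
# results come back in input order even though requests overlap.
with ThreadPoolExecutor(max_workers=10) as pool:
    responses = list(pool.map(send_prompt, prompts))
```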

  • Send Prompt (OpenAI) and Send Conversation (OpenAI) now also offer “gpt-4o” and “gpt-4-turbo” as predefined model selections. As before, you can specify any OpenAI model by providing its name, or provide the id of a custom finetuned model. We removed some models which have been deprecated by OpenAI. The default model for both operators is now “gpt-4o”.

  • Send Prompt (OpenAI) and Send Conversation (OpenAI) now support the definition of a different base_url as part of the dictionary connection. This allows the use of model providers other than OpenAI, e.g., OpenAI models hosted on Microsoft Azure or even API-equivalent model proxies.

  • Send Prompt (OpenAI) and Send Conversation (OpenAI) now calculate an estimate of the total price before starting to query the model and check that the estimate is below the limit specified as a parameter of the operator. This price limit check can be deactivated.
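The idea behind the check can be sketched as follows; the per-token price used here is a made-up placeholder, not an actual OpenAI price, and the function names are hypothetical:

```python
# Hypothetical price per 1,000 tokens -- real prices differ per model.
PRICE_PER_1K_TOKENS = {"gpt-4o": 0.005}

def estimate_price(model, token_counts):
    """Upfront cost estimate over all rows, before any request is sent."""
    return sum(token_counts) / 1000 * PRICE_PER_1K_TOKENS[model]

def within_price_limit(model, token_counts, limit, check_enabled=True):
    """Return True if querying may proceed; the check can be disabled,
    mirroring the deactivatable price limit check of the operators."""
    if not check_enabled:
        return True
    return estimate_price(model, token_counts) <= limit

ok = within_price_limit("gpt-4o", [120, 80, 200], limit=0.01)
```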

  • Finetune (OpenAI) now allows deactivating the price limit check.

  • Generate Prompt now supports special functions as well as the use of a second data set, which can serve as data for the “shots” in few-shot prompting. The new special functions are:

    • {{all}}: adds all values of the current data row as key-value pairs, one text line per data column,

    • {{all_pipe}}: same as {{all}}, but separates the key-value pairs with a pipe symbol and keeps them on a single line,

    • {{shots_all}}: adds all rows of the second data set as key-value pairs in the same format as {{all}},

    • {{shots_all_pipe}}: same, but with the pipe symbol format.

    The new optional target parameter is only relevant for these special functions. If set, it moves the specified column to the end of the list of key-value pairs (provided the target column exists in the data; otherwise the setting has no effect). You can also specify whether the target column should be left out of the functions above, in order to ask for predictions without giving away a known true label.
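    The {{all}}/{{all_pipe}} expansion and the target handling described above can be sketched as follows (an illustrative reimplementation, not the operator's actual code):

```python
def expand_all(row, target=None, include_target=True, pipe=False):
    """Render a data row as key-value pairs; an optional target column
    is moved to the end or, if include_target is False, left out."""
    items = [(k, v) for k, v in row.items() if k != target]
    if target is not None and target in row and include_target:
        items.append((target, row[target]))  # target column goes last
    pairs = [f"{k}: {v}" for k, v in items]
    # {{all_pipe}} keeps everything on one line, separated by pipes;
    # {{all}} uses one line per column.
    return " | ".join(pairs) if pipe else "\n".join(pairs)

row = {"age": 42, "income": 55000, "label": "yes"}
with_label = expand_all(row, target="label")
without_label = expand_all(row, target="label", include_target=False, pipe=True)
```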

Bug fixes

  • Download Model could fail under specific circumstances because of a missing import statement.