# Class: OpenAI
OpenAI LLM implementation
## Implements

- `LLM`
## Constructors

### constructor

• `new OpenAI(init?)`

#### Parameters

| Name | Type |
| :--- | :--- |
| `init?` | `Partial<OpenAI> & { azure?: AzureOpenAIConfig }` |

#### Defined in

packages/core/src/llm/LLM.ts:152
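A minimal construction sketch, assuming the class is imported from the `llamaindex` package; every `init` field is optional, and the API key is typically picked up from the `OPENAI_API_KEY` environment variable when not passed explicitly.

```ts
import { OpenAI } from "llamaindex";

// All fields of `init` are optional; unset fields fall back to defaults.
const llm = new OpenAI({
  model: "gpt-4-1106-preview",
  temperature: 0.2,
  topP: 1,
  maxTokens: 512,
  maxRetries: 5,
  timeout: 60 * 1000, // assumed to be milliseconds, as in the underlying openai client
  apiKey: process.env.OPENAI_API_KEY,
});
```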
## Properties

### additionalChatOptions

• `Optional` additionalChatOptions: `Omit<Partial<ChatCompletionCreateParams>, "model" | "temperature" | "max_tokens" | "messages" | "top_p" | "streaming">`

#### Defined in

packages/core/src/llm/LLM.ts:135
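A hedged sketch of passing extra chat-completion parameters through this property; `frequency_penalty` and `presence_penalty` are standard `ChatCompletionCreateParams` fields that are not in the omitted key list above.

```ts
import { OpenAI } from "llamaindex";

// Forward chat-completion params the class does not model directly.
// The omitted keys ("model", "temperature", ...) are managed by the
// class's own properties and cannot be set here.
const llm = new OpenAI({
  additionalChatOptions: {
    frequency_penalty: 0.3,
    presence_penalty: 0.2,
  },
});
```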
### additionalSessionOptions

• `Optional` additionalSessionOptions: `Omit<Partial<ClientOptions>, "apiKey" | "timeout" | "maxRetries">`

#### Defined in

packages/core/src/llm/LLM.ts:145
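A sketch under the assumption that these options are forwarded to the underlying `openai` client session; the organization id and base URL below are hypothetical placeholders.

```ts
import { OpenAI } from "llamaindex";

// "apiKey", "timeout", and "maxRetries" are excluded from ClientOptions
// here because the class exposes them as first-class properties instead.
const llm = new OpenAI({
  additionalSessionOptions: {
    organization: "org-example", // hypothetical organization id
    baseURL: "https://oai-proxy.example.com/v1", // hypothetical endpoint
  },
});
```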
### apiKey

• `Optional` apiKey: `string` = `undefined`

#### Defined in

packages/core/src/llm/LLM.ts:141

### callbackManager

• `Optional` callbackManager: `CallbackManager`

#### Defined in

packages/core/src/llm/LLM.ts:150
### hasStreaming

• hasStreaming: `boolean` = `true`

#### Implementation of

LLM.hasStreaming

#### Defined in

packages/core/src/llm/LLM.ts:128
### maxRetries

• maxRetries: `number`

#### Defined in

packages/core/src/llm/LLM.ts:142

### maxTokens

• `Optional` maxTokens: `number`

#### Defined in

packages/core/src/llm/LLM.ts:134

### model

• model: `"gpt-3.5-turbo" | "gpt-3.5-turbo-1106" | "gpt-3.5-turbo-16k" | "gpt-4" | "gpt-4-32k" | "gpt-4-1106-preview" | "gpt-4-vision-preview"`

#### Defined in

packages/core/src/llm/LLM.ts:131

### session

• session: `OpenAISession`

#### Defined in

packages/core/src/llm/LLM.ts:144

### temperature

• temperature: `number`

#### Defined in

packages/core/src/llm/LLM.ts:132

### timeout

• `Optional` timeout: `number`

#### Defined in

packages/core/src/llm/LLM.ts:143

### topP

• topP: `number`

#### Defined in

packages/core/src/llm/LLM.ts:133
## Accessors

### metadata

• `get metadata(): Object`

#### Returns

`Object`

| Name | Type |
| :--- | :--- |
| `contextWindow` | `number` |
| `maxTokens` | `undefined \| number` |
| `model` | `"gpt-3.5-turbo" \| "gpt-3.5-turbo-1106" \| "gpt-3.5-turbo-16k" \| "gpt-4" \| "gpt-4-32k" \| "gpt-4-1106-preview" \| "gpt-4-vision-preview"` |
| `temperature` | `number` |
| `tokenizer` | `CL100K_BASE` |
| `topP` | `number` |

#### Implementation of

LLM.metadata

#### Defined in

packages/core/src/llm/LLM.ts:206
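A short usage sketch of the getter; the `contextWindow` value in the comment is the commonly documented window for `"gpt-4"` and is an assumption, not stated on this page.

```ts
import { OpenAI } from "llamaindex";

const llm = new OpenAI({ model: "gpt-4" });

// The getter derives the context window from the configured model.
const { contextWindow, maxTokens, tokenizer } = llm.metadata;
console.log(contextWindow); // e.g. 8192 for "gpt-4"
console.log(maxTokens); // undefined unless set at construction time
console.log(tokenizer); // CL100K_BASE
```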
## Methods

### chat

▸ `chat<T, R>(messages, parentEvent?, streaming?): Promise<R>`

Get a chat response from the LLM.

#### Type parameters

| Name | Type |
| :--- | :--- |
| `T` | `extends undefined \| boolean = undefined` |
| `R` | `T extends true ? AsyncGenerator<string, void, unknown> : ChatResponse` |

#### Parameters

| Name | Type | Description |
| :--- | :--- | :--- |
| `messages` | `ChatMessage[]` | The messages to send. The return type of `chat()` and `complete()` is selected by the `streaming` parameter: when it is `true`, the promise resolves to an async generator of string deltas; otherwise it resolves to a full `ChatResponse`. |
| `parentEvent?` | `Event` | - |
| `streaming?` | `T` | - |

#### Returns

`Promise<R>`

#### Implementation of

LLM.chat

#### Defined in

packages/core/src/llm/LLM.ts:249
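A usage sketch of both return shapes, assuming a module context where top-level `await` is available:

```ts
import { OpenAI, ChatMessage } from "llamaindex";

const llm = new OpenAI();
const messages: ChatMessage[] = [
  { role: "system", content: "You are a terse assistant." },
  { role: "user", content: "What is a vector index?" },
];

// streaming omitted (undefined): resolves to a ChatResponse.
const response = await llm.chat(messages);
console.log(response.message.content);

// streaming = true: resolves to an AsyncGenerator yielding string deltas.
const stream = await llm.chat(messages, undefined, true);
for await (const delta of stream) {
  process.stdout.write(delta);
}
```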
### complete

▸ `complete<T, R>(prompt, parentEvent?, streaming?): Promise<R>`

Get a prompt completion from the LLM.

#### Type parameters

| Name | Type |
| :--- | :--- |
| `T` | `extends undefined \| boolean = undefined` |
| `R` | `T extends true ? AsyncGenerator<string, void, unknown> : ChatResponse` |

#### Parameters

| Name | Type | Description |
| :--- | :--- | :--- |
| `prompt` | `string` | The prompt to complete. |
| `parentEvent?` | `Event` | - |
| `streaming?` | `T` | - |

#### Returns

`Promise<R>`

#### Implementation of

LLM.complete

#### Defined in

packages/core/src/llm/LLM.ts:286
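A non-streaming sketch; per the type parameters above, the resolved value has the same `ChatResponse` shape as `chat()`.

```ts
import { OpenAI } from "llamaindex";

const llm = new OpenAI();

// With streaming omitted, complete() resolves to a ChatResponse.
const response = await llm.complete(
  "Explain retrieval-augmented generation in one sentence.",
);
console.log(response.message.content);
```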
### mapMessageType

▸ `mapMessageType(messageType): "function" | "user" | "assistant" | "system"`