Input to the OpenAI class.
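
These options are passed to the OpenAI class constructor. The sketch below is a minimal usage example, assuming the LangChain.js OpenAI class; the import path and the invocation method (call) may differ between library versions.

```typescript
import { OpenAI } from "langchain/llms/openai";

// Construct the model with a subset of the properties documented below.
const model = new OpenAI({
  modelName: "text-davinci-003",
  temperature: 0.7,
  maxTokens: 256,
  // If omitted, the key falls back to the OPENAI_API_KEY environment variable.
  openAIApiKey: process.env.OPENAI_API_KEY,
});

// Generate a single completion for a prompt.
const completion = await model.call("Write a haiku about autumn.");
console.log(completion);
```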

Properties

batchSize: number

Batch size to use when passing multiple documents to generate

frequencyPenalty: number

Penalizes repeated tokens according to frequency

modelName: string

Model name to use

n: number

Number of completions to generate for each prompt

presencePenalty: number

Penalizes repeated tokens

streaming: boolean

Whether to stream the results or not. Enabling streaming disables tokenUsage reporting

temperature: number

Sampling temperature to use

topP: number

Total probability mass of tokens to consider at each step

bestOf?: number

Generates bestOf completions server-side and returns the "best" one (the completion with the highest log probability per token)

logitBias?: Record<string, number>

Dictionary used to adjust the probability of specific tokens being generated

maxTokens?: number

Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.

modelKwargs?: Record<string, any>

Holds any additional parameters that are valid to pass to openai.createCompletion that are not explicitly specified on this class.

openAIApiKey?: string

API key to use when making requests to OpenAI. Defaults to the value of OPENAI_API_KEY environment variable.

stop?: string[]

List of stop words to use when generating

timeout?: number

Timeout to use when making requests to OpenAI.

user?: string

Unique string identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
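
As a further illustration of the optional properties above, the sketch below combines stop, timeout, logitBias, modelKwargs, and streaming in one configuration. This is an assumed example rather than part of the interface definition: the token id used in logitBias and the timeout units are illustrative, and the callback wiring needed to consume streamed tokens is omitted because its signature varies across LangChain.js versions.

```typescript
import { OpenAI } from "langchain/llms/openai";

// A model configured with several optional inputs. Note that enabling
// streaming disables tokenUsage reporting, as documented above.
const streamingModel = new OpenAI({
  modelName: "text-davinci-003",
  streaming: true,
  stop: ["\n\n"],               // stop generating at the first blank line
  timeout: 10_000,              // request timeout (assumed to be milliseconds)
  logitBias: { "50256": -100 }, // example: strongly suppress one token id
  modelKwargs: {
    // Any extra parameters accepted by openai.createCompletion but not
    // exposed as named properties on this class can be passed through here.
    suffix: "\n",
  },
});
```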
