meta/meta-llama-3-8b

Base version of Llama 3, an 8 billion parameter language model from Meta.

Input

Prompt

The text prompt to send to the model.

Minimum tokens

The minimum number of tokens the model should generate as output.

Maximum tokens

The maximum number of tokens the model should generate as output.

Temperature

The value used to modulate the next-token probabilities.

Top p

A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).

Top k

The number of highest-probability tokens to consider for generating the output. If > 0, only keep the top k tokens with the highest probability (top-k filtering).
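The temperature, top-p, and top-k parameters above compose a standard sampling pipeline: scale the logits, filter the distribution, then sample. A minimal sketch in Python (NumPy only; the function name and defaults are illustrative, not this model's implementation):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0, rng=None):
    """Sketch of temperature + top-k + top-p (nucleus) sampling."""
    rng = rng or np.random.default_rng()
    # Temperature modulates the next-token probabilities: lower => sharper.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Top-k filtering: if > 0, keep only the k highest-probability tokens
    # (ties at the threshold may keep a few extra in this simple sketch).
    if top_k > 0:
        kth_largest = np.sort(probs)[-top_k]
        probs = np.where(probs >= kth_largest, probs, 0.0)
        probs /= probs.sum()
    # Nucleus (top-p) filtering: if < 1.0, keep the smallest set of top
    # tokens whose cumulative probability reaches top_p.
    if top_p < 1.0:
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        keep = cumulative - probs[order] < top_p  # tokens before the cutoff
        mask = np.zeros(probs.shape, dtype=bool)
        mask[order[keep]] = True
        probs = np.where(mask, probs, 0.0)
        probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Note that `top_k=1` reduces this to greedy decoding, and a very small `top_p` keeps only the single most likely token.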

Presence penalty

Penalizes tokens that have already appeared in the output, encouraging the model to introduce new tokens.

Frequency penalty

Penalizes tokens in proportion to how often they have already appeared in the output, reducing repetition.
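The exact penalty formulas are not documented here; one common formulation (used by several OpenAI-style APIs) subtracts from a token's logit a flat presence cost plus a frequency cost scaled by its count. A hypothetical sketch:

```python
from collections import Counter

def apply_penalties(logits, generated_ids,
                    presence_penalty=0.0, frequency_penalty=0.0):
    """Hypothetical helper showing one common penalty formulation,
    not necessarily this model's exact one."""
    counts = Counter(generated_ids)
    adjusted = list(logits)
    for token_id, count in counts.items():
        # Presence penalty: flat cost once a token has appeared at all.
        # Frequency penalty: cost grows with each repetition.
        adjusted[token_id] -= presence_penalty + frequency_penalty * count
    return adjusted
```

Positive values discourage repetition; setting both to 0 leaves the logits unchanged.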

Prompt template

The string `{prompt}` will be substituted with the input prompt. If you want to generate dialog output, use this template as a starting point, construct the full prompt string manually, and leave the template set to `{prompt}` so your string passes through unchanged.
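The substitution above is a plain string replacement; a minimal sketch (the helper name and dialog markers are illustrative, since this is a base model with no fixed chat format):

```python
def render_prompt(prompt_template: str, prompt: str) -> str:
    # Substitute the user's input for the literal `{prompt}` placeholder.
    return prompt_template.replace("{prompt}", prompt)

# Pass-through template: the prompt is used verbatim.
text = render_prompt("{prompt}", "The sea is")

# For dialog, build the conversation string yourself and keep the
# template as "{prompt}" so it passes through unchanged. These turn
# markers are purely illustrative.
dialog = "User: Write a haiku about the sea.\nAssistant:"
text = render_prompt("{prompt}", dialog)
```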
