Llama-3.2-1B-Instruct

The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out).

Text completion

In this example, we run Llama 3.2 1B Instruct on two text-completion prompts: a step-by-step reasoning question and a sentiment-analysis task in which the model classifies a statement as neutral, negative, or positive.

Input

Prompt* (string)

how many s in mississippi. think step by step

Input text for the model.
temperature* (number, minimum: 0, maximum: 1): 0.7

Controls randomness. Lower values make the model more deterministic; higher values make it more random. Default: 0.7.
top_p* (number, minimum: 0, maximum: 1): 0.95

Nucleus sampling: the model samples only from the smallest set of tokens whose cumulative probability exceeds top_p. Default: 0.95.
max_tokens* (integer, minimum: 1): 512

Maximum number of tokens to generate. Default: 512.
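
These three settings are standard sampling parameters, so the same request can be reproduced outside this page. A minimal sketch, assuming the public Hugging Face checkpoint meta-llama/Llama-3.2-1B-Instruct (the hosted demo's own client is not shown on this page):

```python
# Minimal sketch using the Hugging Face transformers pipeline; this is an
# assumption about tooling, not the API behind this hosted demo.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # public Hub checkpoint
)

out = generator(
    "how many s in mississippi. think step by step",
    do_sample=True,      # enable sampling so temperature/top_p take effect
    temperature=0.7,     # form value above
    top_p=0.95,          # form value above
    max_new_tokens=512,  # transformers' name for max_tokens
)
print(out[0]["generated_text"])
```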

Input

Prompt* (string)

Classify the text into neutral, negative, or positive.

Text: The product mentioned full coverage, but my claim wasn't accepted, and the experience was not something I would recommend to others.

Sentiment of the text is:

Input text for the model.
temperature* (number, minimum: 0, maximum: 1): 0.7

Controls randomness. Lower values make the model more deterministic; higher values make it more random. Default: 0.7.
top_p* (number, minimum: 0, maximum: 1): 0.95

Nucleus sampling: the model samples only from the smallest set of tokens whose cumulative probability exceeds top_p. Default: 0.95.
max_tokens* (integer, minimum: 1): 512

Maximum number of tokens to generate. Default: 512.
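
The classification here is achieved purely through prompting: the model simply completes the text after "Sentiment of the text is:". A small sketch of how such a zero-shot prompt could be assembled and its one-word label recovered; both helper names are hypothetical, for illustration only:

```python
# Hypothetical helpers illustrating the zero-shot prompt pattern used above.
def build_sentiment_prompt(text: str) -> str:
    return (
        "Classify the text into neutral, negative, or positive.\n\n"
        f"Text: {text}\n\n"
        "Sentiment of the text is:"
    )

def parse_label(completion: str) -> str:
    # Take the first word of the completion and normalize it.
    return completion.strip().split()[0].strip(".,").lower()

prompt = build_sentiment_prompt(
    "The product mentioned full coverage, but my claim wasn't accepted, "
    "and the experience was not something I would recommend to others."
)
# completion = generator(prompt, ...)  # e.g. via the pipeline sketched earlier
print(parse_label("Negative. Reason: ..."))  # -> "negative"
```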

Output

Sentiment analysis

GPU: A100
1.5 mins

Response

Negative.

Reason:

The text expresses dissatisfaction with the product and the experience, stating that the claim was not accepted and the experience was not something the author would recommend. This indicates a negative sentiment. Note: The text does not contain any explicit negative

Explainability:

Relevance mapping for input tokens
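
This page does not document how the relevance map is computed, so the following is only a generic gradient-times-input saliency sketch of the underlying idea, attributing the model's next-token prediction back to each input token; it is an assumption for illustration, not AryaXAI's method:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # public Hub checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

enc = tok("Sentiment of the text is:", return_tensors="pt")
# Re-embed the tokens as a leaf tensor so gradients accumulate on it.
embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
embeds.requires_grad_(True)

logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
logits[0, -1].max().backward()  # attribute the top next-token logit

# Gradient x input, summed over the embedding dimension, one score per token.
relevance = (embeds.grad * embeds).sum(dim=-1).abs()[0]
for token, score in zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), relevance):
    print(f"{token:>15s}  {score.item():.4f}")
```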

Code Completion

GPU: A100
1.5 mins

The Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding ones, usually starting with 0 and 1.

### Fibonacci Function

Here's a Python function that calculates the n-th Fibonacci number using memoization to improve performance.

```python
def fibonacci(n, memo={}):
    """
    Calculate the n-th Fibonacci number.

    Args:
        n (int): The position of the Fibonacci number to calculate.
        memo (dict): A dictionary to store previously calculated Fibonacci numbers.

    Returns:
        int: The n-th Fibonacci number.
    """
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    elif n not in memo:
        memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo)
    return memo[n]

# Example usage:
print(fibonacci(10))  # Output: 55
```
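
One caveat about the generated code: the mutable default argument memo={} is shared across all calls, which happens to act as a cache here but is a well-known Python pitfall. An equivalent that avoids it uses functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache results per n, with no shared mutable default
def fibonacci(n: int) -> int:
    if n <= 0:
        return 0
    if n == 1:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # Output: 55
```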

Explainability

Model Information

The Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
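
Because the instruction-tuned variants expect their chat format, prompts are usually wrapped in the Llama 3.2 chat template rather than sent as raw text. A minimal sketch, assuming the public Hugging Face checkpoint:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "how many s in mississippi. think step by step"},
]
# Renders the messages into Llama 3.2's chat markup and appends the
# assistant header so the model knows to start generating.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```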

Model Developer: Meta

Model Architecture: Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

Supported Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.

Llama 3.2 Model Family: Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
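
As a toy illustration of GQA (not Llama's actual implementation, and with masking and caching omitted), groups of query heads share one key/value head, shrinking the KV cache proportionally:

```python
import torch

# Toy shapes: 8 query heads sharing 2 key/value heads (groups of 4).
batch, seq, n_q_heads, n_kv_heads, head_dim = 1, 5, 8, 2, 16
q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# GQA: broadcast each K/V head to its group of query heads. The KV cache
# stores only n_kv_heads heads, an n_q_heads / n_kv_heads reduction.
k = k.repeat_interleave(n_q_heads // n_kv_heads, dim=1)
v = v.repeat_interleave(n_q_heads // n_kv_heads, dim=1)

attn = torch.softmax(q @ k.transpose(-2, -1) / head_dim**0.5, dim=-1)
out = attn @ v
print(out.shape)  # torch.Size([1, 8, 5, 16])
```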

Model Release Date: Sept 25, 2024

Status: This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.

License: Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).

Feedback: Instructions on how to provide feedback or comments on the model can be found in the Llama Models README. For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, see the Llama documentation.

Is Explainability critical for your 'AI' solutions?

Schedule a demo with our team to understand how AryaXAI can make your mission-critical 'AI' acceptable and aligned with all your stakeholders.