
ResNet-34

ResNet-34 is a convolutional neural network (CNN) architecture that is part of the ResNet (Residual Network) family, introduced in the groundbreaking 2015 paper "Deep Residual Learning for Image Recognition" by He et al.

Image classification using ResNet-34

In computer vision, image classification is a classic problem. In this example, we benchmark multiple explainability techniques on a ResNet-34 image classifier.


Output

Saliency outputs from each explainability technique (all generated on A100 hardware):

  • ResNet output using DL Backtrace
  • ResNet output using Grad-CAM
  • ResNet output using Integrated Gradients
  • ResNet output using SmoothGrad
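As a rough illustration (not the exact benchmark code), the Grad-CAM, Integrated Gradients, and SmoothGrad attributions above can be reproduced with the open-source Captum library. DL Backtrace is AryaXAI's own method and is distributed separately, so it is omitted here; the input tensor `img` below is a placeholder for a preprocessed, ImageNet-normalized image.

```python
import torch
from torchvision.models import resnet34, ResNet34_Weights  # torchvision >= 0.13
from captum.attr import (IntegratedGradients, NoiseTunnel,
                         LayerGradCam, LayerAttribution)

model = resnet34(weights=ResNet34_Weights.IMAGENET1K_V1).eval()

# Placeholder input: replace with a real 1x3x224x224 normalized image tensor.
img = torch.randn(1, 3, 224, 224)
target = model(img).argmax(dim=1).item()   # explain the predicted class

# Integrated Gradients: accumulate gradients along a path from a baseline.
ig = IntegratedGradients(model)
ig_attr = ig.attribute(img, target=target, n_steps=50)

# SmoothGrad: average attributions over noisy copies of the input.
sg_attr = NoiseTunnel(ig).attribute(img, nt_type="smoothgrad",
                                    nt_samples=25, target=target)

# Grad-CAM on the last convolutional stage, upsampled to the input size.
gc = LayerGradCam(model, model.layer4)
gc_attr = LayerAttribution.interpolate(gc.attribute(img, target=target),
                                       img.shape[2:])
```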

We used a ResNet-34 model, which is built from basic blocks (two 3×3 convolutions per block). The exact block configuration of ResNet-34 is: 3 blocks (64 filters), 4 blocks (128 filters), 6 blocks (256 filters), and 3 blocks (512 filters), with downsampling at the start of stages 3, 4, and 5, followed by global average pooling and a fully connected layer for classification.
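This configuration can be checked against torchvision's reference implementation (which builds ResNet-34 with layers = [3, 4, 6, 3]). A small sketch, assuming torchvision is installed:

```python
from torchvision.models import resnet34

model = resnet34()
for name in ["layer1", "layer2", "layer3", "layer4"]:
    stage = getattr(model, name)
    blocks = len(stage)                    # number of basic blocks per stage
    filters = stage[0].conv1.out_channels  # 64, 128, 256, 512
    print(f"{name}: {blocks} blocks, {filters} filters")
```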

ResNet-34 is a convolutional neural network (CNN) architecture that is part of the ResNet (Residual Network) family, introduced in the groundbreaking 2015 paper "Deep Residual Learning for Image Recognition" by He et al. The ResNet family was designed to address the vanishing gradient problem that occurs in deep neural networks, enabling the successful training of very deep networks.

Key Features of ResNet-34

  1. Depth:
    • ResNet-34 contains 34 layers, including convolutional layers, pooling layers, and fully connected layers.
  2. Residual Connections:
    • Introduced to bypass one or more layers, allowing the model to learn residual mappings instead of direct mappings.
    • Helps mitigate the vanishing gradient problem, enabling deep architectures to converge during training.
    • A residual block is mathematically expressed as $\mathbf{y} = \mathcal{F}(\mathbf{x}, \{W_i\}) + \mathbf{x}$, where $\mathcal{F}$ is the residual mapping to be learned and $\mathbf{x}$ is the input (see the sketch after this list).
  3. Building Blocks:
    • The network uses basic residual blocks with 2 stacked convolutional layers each.
    • Batch normalization follows every convolution; ReLU is applied after the first convolution and again after the residual addition.
  4. Efficient Depth:
    • At 34 layers, ResNet-34 is not as deep as ResNet-50 or ResNet-101, but it still provides significant feature extraction capability, making it suitable for moderate computational resources.
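A minimal PyTorch sketch of the basic block described above, assuming the standard idiom (the 1×1 projection shortcut used at downsampling stages is omitted for brevity):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Basic residual block: y = F(x) + x, where F is two 3x3 convolutions.
    Simplified sketch; shape-changing blocks need a 1x1 projection shortcut."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)   # residual addition, then ReLU
```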

Architecture of ResNet-34

  1. Input Layer:
    • Initial convolution with a 7×7 kernel, stride 2, and 64 output channels, followed by a 3×3 max-pooling layer with stride 2.
  2. Residual Blocks:
    • Conv2_x: 3 residual blocks, each containing 2 convolutional layers (3×3).
    • Conv3_x: 4 residual blocks, each with 2 convolutional layers.
    • Conv4_x: 6 residual blocks, each with 2 convolutional layers.
    • Conv5_x: 3 residual blocks, each with 2 convolutional layers.
  3. Output Layer:
    • Global average pooling (reduces the feature map to a single vector).
    • Fully connected (dense) layer with the number of neurons equal to the number of output classes.
    • Softmax activation function for classification tasks.
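To make the stage structure concrete, the following sketch traces the tensor shape through torchvision's resnet34 with a dummy 224×224 input:

```python
import torch
from torchvision.models import resnet34

model = resnet34().eval()
x = torch.randn(1, 3, 224, 224)

# Stem: 7x7 conv (stride 2) + 3x3 max pool (stride 2) -> 1x64x56x56
x = model.maxpool(model.relu(model.bn1(model.conv1(x))))
for name in ["layer1", "layer2", "layer3", "layer4"]:
    x = getattr(model, name)(x)
    print(name, tuple(x.shape))
# layer1 (1, 64, 56, 56), layer2 (1, 128, 28, 28),
# layer3 (1, 256, 14, 14), layer4 (1, 512, 7, 7)

x = torch.flatten(model.avgpool(x), 1)   # global average pooling -> 1x512
logits = model.fc(x)                     # fully connected head -> 1x1000
```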

Parameter Details

  • Total Parameters: Approximately 21.8 million.
  • Layers:
    • 7×7 convolution: 1 layer.
    • Residual blocks: 16 blocks with 2 layers each.
    • Fully connected: 1 layer.
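These counts add up to 34 weighted layers (1 + 16×2 + 1), and the ~21.8 million figure can be verified directly against torchvision's implementation with its default 1000-class ImageNet head:

```python
from torchvision.models import resnet34

model = resnet34()
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e6:.1f}M parameters")   # ~21.8M for 1000 output classes
```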

Advantages of ResNet-34

  1. Ease of Training:
    • Residual connections allow gradients to flow through the network more easily, avoiding vanishing gradients.
  2. Performance:
    • Demonstrates excellent accuracy on standard datasets like ImageNet, providing a good trade-off between depth and computational efficiency.
  3. Versatility:
    • Can be fine-tuned for tasks like object detection, segmentation, and feature extraction.

Use Cases

  1. Image Classification:
    • ResNet-34 is widely used for classifying images into various categories, particularly on datasets like ImageNet.
  2. Feature Extraction:
    • Pre-trained versions of ResNet-34 (e.g., on ImageNet) are often used as feature extractors in transfer learning (see the sketch after this list).
  3. Medical Imaging:
    • Applied to classify medical images (e.g., X-rays, MRIs) due to its ability to capture fine details.
  4. Object Detection and Segmentation:
    • Acts as a backbone in detection frameworks like Faster R-CNN and Mask R-CNN.
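A minimal transfer-learning sketch for the feature-extraction use case, assuming a hypothetical 5-class downstream task: freeze the pre-trained backbone and retrain only the classification head.

```python
import torch.nn as nn
from torchvision.models import resnet34, ResNet34_Weights

# Reuse ImageNet features; train only a new classification head.
model = resnet34(weights=ResNet34_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False            # freeze the convolutional backbone

num_classes = 5                        # hypothetical downstream task size
model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable head
```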

Limitations

  1. Computational Resources:
    • Though more efficient than deeper models, ResNet-34 still demands non-trivial computational power, especially during training.
  2. Overfitting:
    • Without adequate regularization or sufficient training data, ResNet-34 can overfit, especially on small datasets.

Comparison with Other ResNet Variants

Model      | Number of Layers | Parameters (Million) | Use Case
ResNet-18  | 18               | 11.7                 | Lightweight, resource-constrained tasks
ResNet-34  | 34               | 21.8                 | Balanced depth and efficiency
ResNet-50  | 50               | 25.6                 | High accuracy, slightly higher resource usage
ResNet-101 | 101              | 44.5                 | Very deep, for complex problems

Conclusion

ResNet-34 strikes a balance between performance and computational efficiency, making it a popular choice for a wide range of deep learning applications. Its residual connections ensure stable training and scalability, maintaining its relevance in modern AI research and applications.

Is Explainability critical for your 'AI' solutions?

Schedule a demo with our team to understand how AryaXAI can make your mission-critical 'AI' acceptable and aligned with all your stakeholders.