Transformers documentation

This model was released on 2022-03-30 and added to Hugging Face Transformers on 2023-08-29.

ViTDet

PyTorch

Overview

The ViTDet model was proposed in Exploring Plain Vision Transformer Backbones for Object Detection by Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He. ViTDet uses a plain, non-hierarchical Vision Transformer as the backbone for object detection.

The abstract from the paper is the following:

We explore the plain, non-hierarchical Vision Transformer (ViT) as a backbone network for object detection. This design enables the original ViT architecture to be fine-tuned for object detection without needing to redesign a hierarchical backbone for pre-training. With minimal adaptations for fine-tuning, our plain-backbone detector can achieve competitive results. Surprisingly, we observe: (i) it is sufficient to build a simple feature pyramid from a single-scale feature map (without the common FPN design) and (ii) it is sufficient to use window attention (without shifting) aided with very few cross-window propagation blocks. With plain ViT backbones pre-trained as Masked Autoencoders (MAE), our detector, named ViTDet, can compete with the previous leading methods that were all based on hierarchical backbones, reaching up to 61.3 AP_box on the COCO dataset using only ImageNet-1K pre-training. We hope our study will draw attention to research on plain-backbone detectors.

This model was contributed by nielsr. The original code can be found here.

Tips:

  • At the moment, only the backbone is available.
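
Since only the backbone is exposed, a typical use is to extract feature maps that a downstream detection head can consume. Below is a minimal sketch using the VitDetBackbone class with randomly initialized weights and the default configuration (which returns the feature map of the last stage); it is illustrative rather than a training-ready setup.

>>> from transformers import VitDetConfig, VitDetBackbone
>>> import torch

>>> # Randomly initialized backbone; by default only the last stage's feature map is returned
>>> config = VitDetConfig()
>>> backbone = VitDetBackbone(config)

>>> pixel_values = torch.randn(1, 3, 224, 224)

>>> with torch.no_grad():
...     outputs = backbone(pixel_values)

>>> # One feature map per requested stage
>>> [feature_map.shape for feature_map in outputs.feature_maps]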

VitDetConfig

class transformers.VitDetConfig

( hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 mlp_ratio = 4 hidden_act = 'gelu' dropout_prob = 0.0 initializer_range = 0.02 layer_norm_eps = 1e-06 image_size = 224 pretrain_image_size = 224 patch_size = 16 num_channels = 3 qkv_bias = True drop_path_rate = 0.0 window_block_indices = [] residual_block_indices = [] use_absolute_position_embeddings = True use_relative_position_embeddings = False window_size = 0 out_features = None out_indices = None **kwargs )

Parameters

  • hidden_size (int, optional, defaults to 768) — Dimension of the hidden representations.
  • num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
  • mlp_ratio (int, optional, defaults to 4) — Ratio of the MLP hidden dimension to the embedding dimension.
  • hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder. For example, "gelu", "relu", "silu", etc.
  • dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all dropout layers.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • layer_norm_eps (float, optional, defaults to 1e-06) — The epsilon used by the layer normalization layers.
  • image_size (int, optional, defaults to 224) — The size (resolution) of each image.
  • pretrain_image_size (int, optional, defaults to 224) — The size (resolution) of each image during pretraining.
  • patch_size (int, optional, defaults to 16) — The size (resolution) of each patch.
  • num_channels (int, optional, defaults to 3) — The number of input channels.
  • qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.
  • drop_path_rate (float, optional, defaults to 0.0) — Drop path (stochastic depth) rate.
  • window_block_indices (list[int], optional, defaults to []) — List of indices of blocks that should have window attention instead of regular global self-attention.
  • residual_block_indices (list[int], optional, defaults to []) — List of indices of blocks that should have an extra residual block after the MLP.
  • use_absolute_position_embeddings (bool, optional, defaults to True) — Whether to use absolute position embeddings.
  • use_relative_position_embeddings (bool, optional, defaults to False) — Whether to add relative position embeddings to the attention maps.
  • window_size (int, optional, defaults to 0) — The size of the attention window.
  • out_features (list[str], optional) — Names of the intermediate hidden states (feature maps) to return from the backbone, e.g. "stem", "stage1", "stage2", etc., depending on how many stages the model has.
  • out_indices (list[int], optional) — Indices of the intermediate hidden states (feature maps) to return from the backbone. Each index corresponds to one stage of the model.

This is the configuration class to store the configuration of a VitDetModel. It is used to instantiate a ViTDet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ViTDet google/vitdet-base-patch16-224 architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

>>> from transformers import VitDetConfig, VitDetModel

>>> # Initializing a VitDet configuration
>>> configuration = VitDetConfig()

>>> # Initializing a model (with random weights) from the configuration
>>> model = VitDetModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
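
The window-attention related arguments can also be set explicitly. The sketch below loosely follows the paper's recipe of window attention in most blocks, with a few evenly spaced global-attention blocks for cross-window propagation; the window size and block indices here are illustrative and not taken from a released checkpoint.

>>> # Illustrative: blocks 2, 5, 8 and 11 keep global attention, the remaining blocks use windowed attention
>>> configuration = VitDetConfig(
...     window_size=14,
...     window_block_indices=[0, 1, 3, 4, 6, 7, 9, 10],
...     use_relative_position_embeddings=True,
... )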

VitDetModel

class transformers.VitDetModel

( config: VitDetConfig )

Parameters

  • config (VitDetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare ViTDet model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
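
As with other Transformers models, pretrained weights can be loaded with from_pretrained(). The checkpoint identifier below simply mirrors the one mentioned in the configuration section above and is illustrative; substitute the Hub repository you actually want to load.

>>> from transformers import VitDetModel

>>> # Illustrative checkpoint identifier
>>> model = VitDetModel.from_pretrained("google/vitdet-base-patch16-224")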

forward

( pixel_values: torch.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None **kwargs ) BaseModelOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.Tensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using an image processor; see the documentation of the image processor's __call__ method for details.
  • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
  • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
  • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns

BaseModelOutput or tuple(torch.FloatTensor)

A BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (VitDetConfig) and inputs.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The VitDetModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

>>> from transformers import VitDetConfig, VitDetModel
>>> import torch

>>> config = VitDetConfig()
>>> model = VitDetModel(config)

>>> pixel_values = torch.randn(1, 3, 224, 224)

>>> with torch.no_grad():
...     outputs = model(pixel_values)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 768, 14, 14]
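
The optional flags described above can be passed on the same call to inspect intermediate outputs. A short continuation of the example, where the expected counts follow from the default configuration:

>>> # Also request hidden states and attention maps
>>> with torch.no_grad():
...     outputs = model(pixel_values, output_hidden_states=True, output_attentions=True)

>>> # One hidden state for the embedding output plus one per layer
>>> print(len(outputs.hidden_states), config.num_hidden_layers)

>>> # One attention tensor per layer
>>> print(len(outputs.attentions))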