Tensor Manipulation: The Art of Shaping Data

The Big Picture: Tensors Are Like LEGO Boxes

Imagine a magical LEGO box full of colorful bricks arranged in neat rows and columns. Tensor manipulation is all about:

  • Picking specific bricks (indexing & slicing)
  • Finding special bricks (boolean & fancy indexing)
  • Moving bricks around (gather & scatter)
  • Looking at the same bricks differently (views & contiguity)
  • Making copies (cloning & copying)
  • Reshaping the box (shape manipulation)
  • Combining or splitting boxes (joining & splitting)
  • Boxes inside boxes (nested tensors)

Let’s explore each one!


1. Tensor Indexing and Slicing

What Is It?

Picking specific items from your tensor — like grabbing a single cookie from a tray or a row of cookies.

Simple Example

import torch

# A box with 5 candies
candies = torch.tensor([10, 20, 30, 40, 50])

# Grab the first candy (index 0)
first = candies[0]       # 10

# Grab candies 1 to 3 (slice)
some = candies[1:4]      # [20, 30, 40]

2D Example (Like a Grid)

# A 3x3 grid of numbers
grid = torch.tensor([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
])

# Get row 0
row0 = grid[0]           # [1, 2, 3]

# Get element at row 1, col 2
val = grid[1, 2]         # 6

# Get all rows, column 1
col1 = grid[:, 1]        # [2, 5, 8]

Key Insight: Use start:stop:step for slicing. Negative indices count from the end!
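
For example, here's a quick sketch of step and negative indexing, reusing the candies tensor from above:

# Every other candy (step of 2)
evens = candies[::2]     # [10, 30, 50]

# The last candy
last = candies[-1]       # 50

# The last two candies
tail = candies[-2:]      # [40, 50]

# Note: unlike Python lists, a negative step like candies[::-1]
# raises an error in PyTorch; use torch.flip instead
reversed_candies = torch.flip(candies, dims=[0])  # [50, 40, 30, 20, 10]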


2. Boolean and Fancy Indexing

Boolean Indexing: Ask a Yes/No Question

You can filter tensors with True/False masks — like asking “which candies are bigger than 25?”

candies = torch.tensor([10, 20, 30, 40, 50])

# Which are > 25?
mask = candies > 25     # [F, F, T, T, T]

# Get only those
big_candies = candies[mask]  # [30, 40, 50]

Fancy Indexing: Pick by List

Instead of a slice, give a list of indices you want.

candies = torch.tensor([10, 20, 30, 40, 50])

# I want index 0, 2, and 4
picks = candies[[0, 2, 4]]  # [10, 30, 50]

2D Fancy Indexing

grid = torch.tensor([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
])

# Get elements at (0,0), (1,1), (2,2)
diag = grid[[0,1,2], [0,1,2]]  # [1, 5, 9]
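
Boolean masks also work on the left-hand side for in-place updates. A small sketch, redefining candies:

candies = torch.tensor([10, 20, 30, 40, 50])

# Zero out every candy > 25, in place
candies[candies > 25] = 0
# candies is now [10, 20, 0, 0, 0]

# torch.nonzero finds the indices where a condition holds
idx = torch.nonzero(candies > 15)  # tensor([[1]])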

3. Gather and Scatter

Gather: Collect Specific Items

Think of it as “reach into each row and grab a specific column.”

data = torch.tensor([
    [10, 20, 30],
    [40, 50, 60]
])

# For row 0, take col 1; row 1, take col 2
indices = torch.tensor([[1], [2]])

result = data.gather(1, indices)
# [[20], [60]]

Analogy: Like a treasure map telling you which column to pick from each row.
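
A common real-world use of gather (a sketch with made-up scores and labels) is pulling out each row's score for its true class:

scores = torch.tensor([
    [0.1, 0.7, 0.2],
    [0.5, 0.3, 0.2]
])
labels = torch.tensor([1, 0])  # true class per row

# gather wants the index tensor shaped like the output,
# so add a size-1 column dimension to labels
picked = scores.gather(1, labels.unsqueeze(1))
# [[0.7],
#  [0.5]]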

Scatter: Place Items Back

Opposite of gather — place values at specific positions.

target = torch.zeros(2, 3)
indices = torch.tensor([[1], [2]])
values = torch.tensor([[5.0], [9.0]])

target.scatter_(1, indices, values)
# [[0, 5, 0],
#  [0, 0, 9]]
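
One classic scatter trick is building one-hot vectors. A sketch (torch.nn.functional.one_hot does the same job, but this shows the mechanics):

labels = torch.tensor([2, 0, 1])

one_hot = torch.zeros(3, 3)
one_hot.scatter_(1, labels.unsqueeze(1), 1.0)
# [[0, 0, 1],
#  [1, 0, 0],
#  [0, 1, 0]]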

4. Tensor Views and Contiguity

Views: Same Data, Different Shape

A view doesn’t copy data. It just shows the same memory in a new shape.

a = torch.tensor([1, 2, 3, 4, 5, 6])

# View as 2x3
b = a.view(2, 3)
# [[1, 2, 3],
#  [4, 5, 6]]

# Change b, changes a too!
b[0, 0] = 99
print(a[0])  # 99

Contiguity: Is Memory in Order?

After some operations (like transpose), memory may not be contiguous anymore.

c = b.t()  # Transpose
print(c.is_contiguous())  # False

# Make it contiguous
c_contig = c.contiguous()
print(c_contig.is_contiguous())  # True

Why It Matters: Some operations require contiguous tensors. Use .contiguous() when needed.
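
Here's a sketch of where this bites: .view() refuses non-contiguous memory, while .reshape() quietly copies when it has to:

a = torch.arange(6).view(2, 3)
c = a.t()  # non-contiguous 3x2 view

# c.view(6) would raise a RuntimeError here
flat = c.contiguous().view(6)  # works: copy first, then view
flat2 = c.reshape(6)           # works: reshape copies if needed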


5. Tensor Cloning and Copying

Clone: A True Copy

.clone() creates a new tensor with the same data — changes don’t affect the original.

original = torch.tensor([1, 2, 3])
copy = original.clone()

copy[0] = 99
print(original[0])  # Still 1!

Detach: Break Gradient Connection

If you’re training a neural network, .detach() returns a tensor that shares the same data but is cut off from the autograd graph, so gradients won’t flow through it.

x = torch.tensor([1.0], requires_grad=True)
y = x.detach()  # y won't track gradients

Clone + Detach Combo

safe_copy = x.clone().detach()  # new memory AND no gradient tracking
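
There’s also in-place copying with .copy_(), which overwrites the data of a tensor you’ve already allocated (a quick sketch):

dst = torch.zeros(3)
src = torch.tensor([1.0, 2.0, 3.0])

dst.copy_(src)  # fills dst with src's values, reusing dst's memory
# dst is now [1., 2., 3.]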

6. Tensor Shape Manipulation

Reshape: Flexible Shaping

a = torch.arange(12)  # [0..11]

b = a.reshape(3, 4)   # 3 rows, 4 cols
c = a.reshape(2, -1)  # -1 means "figure it out"
# c is 2x6

Squeeze & Unsqueeze

  • Squeeze: Remove dimensions of size 1
  • Unsqueeze: Add a dimension of size 1

x = torch.tensor([[1, 2, 3]])  # Shape: 1x3

squeezed = x.squeeze()   # Shape: 3 (now 1D)
unsqueezed = squeezed.unsqueeze(0)  # Shape: 1x3
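
squeeze() with no arguments removes every size-1 dimension, which can surprise you; passing a dim makes it targeted. A small sketch:

x = torch.zeros(1, 3, 1)

x.squeeze().shape    # (3,)       both size-1 dims removed
x.squeeze(0).shape   # (3, 1)     only dim 0 removed
x.squeeze(1).shape   # (1, 3, 1)  dim 1 isn't size 1, so nothing happens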

Flatten: Make 1D

grid = torch.tensor([[1,2],[3,4]])
flat = grid.flatten()  # [1, 2, 3, 4]

Permute & Transpose

img = torch.randn(3, 32, 32)  # C, H, W

# Swap to H, W, C
img2 = img.permute(1, 2, 0)   # 32, 32, 3

# Simple transpose for 2D
mat = torch.randn(4, 5)
mat_t = mat.t()  # 5, 4
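
Note that permute and t() return views, so they inherit the contiguity caveat from section 4. A quick sketch, continuing with img2:

img2.is_contiguous()               # False
flat = img2.contiguous().view(-1)  # make memory contiguous, then flatten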

7. Tensor Joining and Splitting

Joining: Combine Tensors

torch.cat — Concatenate Along Existing Dimension

a = torch.tensor([[1, 2]])
b = torch.tensor([[3, 4]])

# Stack vertically (dim=0)
c = torch.cat([a, b], dim=0)
# [[1, 2],
#  [3, 4]]
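
Concatenating along dim=1 glues the same tensors side by side instead:

# Stack horizontally (dim=1)
d = torch.cat([a, b], dim=1)
# [[1, 2, 3, 4]]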

torch.stack — Create New Dimension

x = torch.tensor([1, 2])
y = torch.tensor([3, 4])

stacked = torch.stack([x, y], dim=0)
# [[1, 2],
#  [3, 4]]
# Shape: 2x2 (new dim added)

Splitting: Divide Tensors

torch.split — Split by Size

t = torch.arange(10)

chunks = torch.split(t, 3)
# (tensor([0,1,2]), tensor([3,4,5]),
#  tensor([6,7,8]), tensor([9]))

torch.chunk — Split Into N Parts

parts = torch.chunk(t, 4)
# 4 roughly equal parts
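
split can also take a list of sizes when the pieces should be uneven (a small sketch on the same tensor):

pieces = torch.split(t, [2, 3, 5])
# (tensor([0, 1]), tensor([2, 3, 4]),
#  tensor([5, 6, 7, 8, 9]))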

8. Nested Tensors

What Are They?

Nested tensors hold tensors of different sizes in one container. Perfect for variable-length sequences!

# Three sentences of different lengths
a = torch.tensor([1, 2])
b = torch.tensor([3, 4, 5])
c = torch.tensor([6])

nested = torch.nested.nested_tensor([a, b, c])

Why Use Them?

  • Process batches with different sequence lengths
  • Avoid padding waste
  • Cleaner code for NLP and time-series

Accessing Elements

# Get the second element
second = nested[1]  # tensor([3, 4, 5])

Converting to Padded

# If you need uniform shape
padded = torch.nested.to_padded_tensor(
    nested, padding=0
)
# Pads shorter tensors with 0
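
You can also pull the pieces back out with .unbind(). One caveat: torch.nested is still a prototype API, so details may shift between PyTorch releases.

for t in nested.unbind():
    print(t.shape)
# torch.Size([2]), torch.Size([3]), torch.Size([1])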

Quick Reference Diagram

graph LR
    A[Tensor Manipulation] --> B[Indexing & Slicing]
    A --> C[Boolean & Fancy Indexing]
    A --> D[Gather & Scatter]
    A --> E[Views & Contiguity]
    A --> F[Clone & Copy]
    A --> G[Shape Manipulation]
    A --> H[Join & Split]
    A --> I[Nested Tensors]
    B --> B1["t[0], t[1:3]"]
    C --> C1["t[t > 5], t[[0,2]]"]
    D --> D1["gather(), scatter_()"]
    E --> E1["view(), contiguous()"]
    F --> F1["clone(), detach()"]
    G --> G1["reshape, squeeze, permute"]
    H --> H1["cat, stack, split, chunk"]
    I --> I1["nested_tensor()"]

You Made It!

You now understand the 8 core ways to manipulate tensors in PyTorch:

  1. Indexing/Slicing — Pick elements
  2. Boolean/Fancy Indexing — Filter and select
  3. Gather/Scatter — Advanced picking and placing
  4. Views/Contiguity — Same data, different lens
  5. Clone/Copy — Safe duplicates
  6. Shape Manipulation — Reshape, squeeze, permute
  7. Join/Split — Combine or divide
  8. Nested Tensors — Variable-length magic

Next Step: Try these in a Jupyter notebook! Change values, break things, and rebuild. That’s how you learn.


Remember: Tensors are your building blocks. Master these moves, and you can build anything!
