Tensors

Tensors are the foundational data structure in RUMUS: multi-dimensional arrays of f32 values with automatic differentiation support built in.

Creating Tensors

Use Tensor::new to create a tensor from a flat data vector and a shape. The data is stored in row-major order.

```rust
use rumus::tensor::Tensor;

// Create a 2x3 tensor from a flat vector
let t = Tensor::new(
    vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    vec![2, 3],
);

// Access the shape
assert_eq!(t.shape(), &[2, 3]);
```
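Row-major order means the last axis varies fastest: element (i, j) of a 2x3 tensor sits at flat index `i * 3 + j`. The following standalone sketch (plain Rust, independent of RUMUS's API) shows the general mapping from a multi-dimensional index to a flat offset:

```rust
// Row-major layout: the last axis varies fastest.
// For shape [d0, d1, ..., dk], the stride of an axis is the
// product of all dimensions after it.
fn row_major_index(shape: &[usize], idx: &[usize]) -> usize {
    let mut stride = 1;
    let mut flat = 0;
    for axis in (0..shape.len()).rev() {
        flat += idx[axis] * stride;
        stride *= shape[axis];
    }
    flat
}

fn main() {
    let data = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0];
    // Element (1, 2) of the 2x3 tensor is the last value.
    let flat = row_major_index(&[2, 3], &[1, 2]);
    assert_eq!(flat, 5);
    assert_eq!(data[flat], 6.0);
}
```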

Shapes and Views

RUMUS distinguishes between view operations and operations that allocate new storage. View ops like reshape and transpose are zero-copy — they return a new tensor that shares the same underlying memory but with a different layout descriptor.

Layout descriptor: Each tensor has a shape, strides, and offset. Views modify these fields without touching the data buffer, making them O(1) regardless of tensor size.

```rust
// Reshape is a zero-copy view operation.
// The underlying storage is shared — only the
// layout (shape, strides, offset) changes.
let t = Tensor::new(
    vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    vec![2, 3],
);

let reshaped = t.reshape(vec![3, 2]);
assert_eq!(reshaped.shape(), &[3, 2]);

// Transpose also creates a view — no data is copied.
// The permutation vector reorders the axes.
let transposed = t.transpose(vec![1, 0]);
assert_eq!(transposed.shape(), &[3, 2]);
```
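To see why these views are O(1), here is a minimal sketch of a layout descriptor in plain Rust. The `Layout` type and its methods are illustrative, not RUMUS's real internals: transpose just permutes the shape and stride vectors while the data buffer is never touched.

```rust
// Illustrative sketch: a view is a new Layout over the same buffer.
#[derive(Debug, Clone)]
struct Layout {
    shape: Vec<usize>,
    strides: Vec<usize>,
    offset: usize,
}

impl Layout {
    // Row-major ("contiguous") strides for a fresh tensor.
    fn contiguous(shape: Vec<usize>) -> Self {
        let mut strides = vec![1; shape.len()];
        for i in (0..shape.len().saturating_sub(1)).rev() {
            strides[i] = strides[i + 1] * shape[i + 1];
        }
        Layout { shape, strides, offset: 0 }
    }

    // Transpose permutes shape and strides; the buffer is untouched.
    fn transpose(&self, perm: &[usize]) -> Self {
        Layout {
            shape: perm.iter().map(|&a| self.shape[a]).collect(),
            strides: perm.iter().map(|&a| self.strides[a]).collect(),
            offset: self.offset,
        }
    }

    // Flat index of a coordinate under this layout.
    fn index(&self, idx: &[usize]) -> usize {
        self.offset
            + idx.iter().zip(&self.strides).map(|(i, s)| i * s).sum::<usize>()
    }
}

fn main() {
    let data = vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0];
    let layout = Layout::contiguous(vec![2, 3]);   // strides [3, 1]
    let t = layout.transpose(&[1, 0]);             // shape [3, 2], strides [1, 3]
    // Transposed element (2, 1) is original element (1, 2).
    assert_eq!(data[t.index(&[2, 1])], 6.0);
}
```

The cost of a view is the cost of copying two small vectors, which is why it stays constant whether the tensor holds six floats or a gigabyte.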

Arithmetic Operations

All arithmetic operations are differentiable and automatically recorded on the autograd tape when the input tensors are tracked. This includes element-wise ops, matrix multiplication, and activation functions.

```rust
let a = Tensor::new(vec![1.0, 2.0, 3.0, 4.0], vec![2, 2]);
let b = Tensor::new(vec![5.0, 6.0, 7.0, 8.0], vec![2, 2]);

// Element-wise operations — all are differentiable
// and recorded on the autograd tape.
let sum = a.add(&b);
let diff = a.sub(&b);
let product = a.mul(&b);   // element-wise multiply

// Matrix multiplication
let mm = a.matmul(&b);

// Activation functions live in the nn module
use rumus::nn;
let activated = nn::relu(&sum);

// Dropout (only active in training mode)
let dropped = a.dropout(0.5);
```
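The "autograd tape" mentioned above is the standard tape-based reverse-mode design: each differentiable op appends an entry recording its inputs and output, and the backward pass replays the entries in reverse. A toy sketch (names and structure are illustrative, not RUMUS's actual internals):

```rust
// Toy tape: every recorded op knows its operand and result ids,
// so gradients can be propagated in reverse recording order.
#[derive(Debug, PartialEq)]
enum Op { Add, Mul }

#[derive(Debug)]
struct TapeEntry {
    op: Op,
    inputs: Vec<usize>, // tensor ids
    output: usize,
}

struct Tape {
    entries: Vec<TapeEntry>,
}

impl Tape {
    fn record(&mut self, op: Op, inputs: Vec<usize>, output: usize) {
        self.entries.push(TapeEntry { op, inputs, output });
    }
}

fn main() {
    let mut tape = Tape { entries: Vec::new() };
    // c = a + b, then d = c * b: two entries, visited in
    // reverse order during backpropagation.
    tape.record(Op::Add, vec![0, 1], 2);
    tape.record(Op::Mul, vec![2, 1], 3);
    assert_eq!(tape.entries.len(), 2);
    assert_eq!(tape.entries.last().unwrap().op, Op::Mul);
}
```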

Data Access

Tensor data is protected by an RwLock. Call t.data() for shared read access or t.data_write() for exclusive write access. Guards are dropped automatically when they go out of scope.

```rust
let t = Tensor::new(vec![1.0, 2.0, 3.0], vec![3]);

// Read access returns an RwLockReadGuard.
// Multiple readers can hold this simultaneously.
{
    let data = t.data();
    println!("first element: {}", data[0]);
}   // guard is dropped here

// Write access returns an RwLockWriteGuard.
// Exclusive — blocks all other readers and writers.
{
    let mut data = t.data_write();
    data[0] = 42.0;
}
```
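The guard pattern here is exactly that of `std::sync::RwLock`, which this sketch demonstrates directly against a plain `Vec<f32>` (no RUMUS types involved). Note the explicit scopes: holding a read guard while requesting a write guard on the same thread would deadlock, so drop guards promptly.

```rust
use std::sync::RwLock;

fn main() {
    // Stand-in for a tensor's backing storage.
    let storage = RwLock::new(vec![1.0f32, 2.0, 3.0]);

    {
        // Multiple read guards may coexist.
        let r1 = storage.read().unwrap();
        let r2 = storage.read().unwrap();
        assert_eq!(r1[0], 1.0);
        assert_eq!(r2[2], 3.0);
    }   // both read guards dropped here

    {
        // A write guard is exclusive: acquiring it while any other
        // guard is alive blocks until that guard is dropped.
        let mut w = storage.write().unwrap();
        w[0] = 42.0;
    }

    assert_eq!(storage.read().unwrap()[0], 42.0);
}
```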

Storage Model

Under the hood, tensor storage is partitioned across CPU and GPU memory. The runtime tracks where the canonical copy lives and lazily synchronizes when needed. This is transparent to user code — you work with the same Tensor type regardless of device placement.

```rust
// Tensor storage is partitioned into three variants:
//
//   Cpu(Vec<f32>)        — data lives on the CPU only
//   Gpu(wgpu::Buffer)    — data lives on the GPU only
//   Both { cpu, gpu, dirty }
//       — data exists in both locations.
//         The `dirty` flag tracks which copy is stale.
//
// View operations (reshape, transpose) share the same
// storage and only modify the Layout descriptor:
//
//   Layout { shape, strides, offset }
//
// This means reshaping a 1 GB tensor is instantaneous
// and uses zero additional memory.

// Autograd tracking is stored per-tensor:
//
//   AutogradState::None
//       — not tracked (constants, inference mode)
//   AutogradState::Tracked { grad_id, creator_op, is_leaf }
//       — participates in the computation graph
```