
4.1.1 Linear Algebra Roadmap: Data as Vectors, Batches as Matrices

Linear algebra is the language AI systems use to represent and transform data. Do not start by memorizing proofs. First see what each object does in code.

Look at the Map First

[Figure: Linear Algebra Learning Map]

The chapter flow is:

[Figure: Linear Algebra Chapter Flow]

| Idea | First meaning in AI |
| --- | --- |
| vector | one object written as numbers |
| matrix | many vectors stacked together, or a transformation |
| dot product | compare matching positions and add them up |
| matrix multiplication | many dot products at once |
| eigenvalue / eigenvector | important directions, useful for PCA intuition |
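Each row of the table above can be checked in a few lines of NumPy. This is an illustrative sketch (the arrays and matrices are made up for the demonstration):

```python
import numpy as np

# dot product: compare matching positions and add them up
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
manual = sum(x * y for x, y in zip(a, b))  # 1*4 + 2*5 + 3*6 = 32
assert manual == a @ b

# matrix multiplication: many dot products at once
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(M @ a)  # each row of M dotted with a

# eigenvector: a direction the matrix only stretches, never rotates
S = np.array([[2.0, 0.0],
              [0.0, 3.0]])
vals, vecs = np.linalg.eig(S)
print(vals)   # the stretch factors along those special directions
```

Run it once now; the precise definitions come in the sections that follow.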

Run the Smallest Loop

Create linear_algebra_first_loop.py and run it after installing NumPy (pip install numpy).

import numpy as np

# one sample is a vector: three exam scores for one student
student = np.array([90, 85, 92])

# a batch is a matrix: one row per student
students = np.array(
    [
        [90, 85, 92],
        [70, 88, 75],
        [95, 91, 89],
    ]
)

# how much each exam counts toward the final score
weights = np.array([0.4, 0.2, 0.4])

single_score = student @ weights  # one dot product
all_scores = students @ weights   # three dot products at once

print("student_vector:", student)
print("matrix_shape:", students.shape)
print("single_score:", round(single_score, 2))
print("all_scores:", all_scores.round(2))

Expected output:

student_vector: [90 85 92]
matrix_shape: (3, 3)
single_score: 89.8
all_scores: [89.8 75.6 91.8]

If you accidentally use * instead of @, you get element-by-element multiplication, not a weighted score. This is the most useful early distinction.
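To see the difference side by side, reusing the same arrays as above:

```python
import numpy as np

student = np.array([90, 85, 92])
weights = np.array([0.4, 0.2, 0.4])

elementwise = student * weights  # still a vector: each score scaled separately
weighted = student @ weights     # one number: the scaled scores added up

print("elementwise:", elementwise)  # [36.  17.  36.8]
print("weighted:", weighted)        # 89.8
```

`*` stops after the "multiply matching positions" step; `@` also does the "add them up" step, which is what turns three scores into one weighted score.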

Learn in This Order

| Order | Read | What to focus on first |
| --- | --- | --- |
| 1 | 4.1.2 Vectors | object -> vector, length, dot product, cosine similarity |
| 2 | 4.1.3 Matrices | batch data, matrix multiplication, X @ W + b |
| 3 | 4.1.4 Eigenvalues and Eigenvectors | special directions, PCA intuition |
| 4 | 4.1.5 Vector Spaces | basis, dimension, linear transformation |
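The `X @ W + b` pattern from step 2 is worth previewing now, because it is exactly the batch-of-dot-products idea from the first script. A minimal sketch, with made-up shapes (3 samples with 3 features, a layer with 2 outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(3, 3))  # batch: 3 samples, 3 features each
W = rng.normal(size=(3, 2))  # layer weights: 3 inputs -> 2 outputs
b = np.zeros(2)              # bias, broadcast across every row of the batch

out = X @ W + b
print(out.shape)  # (3, 2): one 2-number output per sample
```

One matrix multiplication processes the whole batch at once; that is why neural network layers are written this way.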

Pass Check

You pass this roadmap when you can explain why one sample is a vector, why a batch is a matrix, what @ does, and why these ideas reappear in RAG similarity, PCA, and neural network layers.
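As a taste of the RAG-similarity connection, here is a toy sketch of cosine similarity (the query and document vectors are invented; real systems use learned embeddings with hundreds of dimensions):

```python
import numpy as np

def cosine(u, v):
    # dot product divided by both lengths:
    # 1.0 = same direction, 0.0 = unrelated
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

query = np.array([1.0, 0.0, 1.0])
doc_a = np.array([2.0, 0.0, 2.0])  # same direction as the query
doc_b = np.array([0.0, 3.0, 0.0])  # orthogonal to the query

print(cosine(query, doc_a))  # 1.0 -> retrieved first
print(cosine(query, doc_b))  # 0.0 -> ignored
```

The same dot product you used for weighted scores, divided by vector lengths, becomes a retrieval ranking. Section 4.1.2 develops this properly.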