Notes on
Grokking Algorithms
by Aditya Y. Bhargava
The book explains each of these algorithms very well. There are also exercises to help you deepen your understanding.
I’ve studied Computer Science for years at this point, so the concepts were mostly a refresher for me. However, I still learned a few things, and I think this book would be a great introduction to algorithms for someone who hasn’t studied Computer Science.
Because of my prior knowledge, I haven’t noted down any of the explanations from the book. Instead, I’ve included a short summary of each algorithm, and a TypeScript implementation. I read this book to practice implementing the algorithms, to refresh them in my mind, and to fill in gaps in my knowledge.
You can find the full source code for these implementations here.
Binary Search
Binary search is a fast search algorithm with a run-time complexity of O(log n).
For this algorithm to work properly, the data should be sorted.
Binary search looks for a particular item by comparing it to the middle item of the collection.
If a match occurs, the index of the item is returned.
If the middle item is greater than the target, the search continues in the sub-array to the left of the middle item.
Otherwise, the search continues in the sub-array to the right of the middle item.
export function binarySearch(arr: number[], target: number): number {
let left = 0;
let right = arr.length - 1;
while (left <= right) {
const mid = Math.floor((left + right) / 2);
const guess = arr[mid];
if (guess === target) {
return mid;
} else if (guess < target) {
left = mid + 1;
} else if (guess > target) {
right = mid - 1;
}
}
return -1;
}
Linked List
Linked lists are a data structure that can be used to store a collection of items.
Each item in the list is stored in a node. Each node contains a value and a reference to the next node in the list. The first node in the list is called the head. The last node in the list is called the tail.
Linked lists have some benefits over arrays. For example, adding and removing items from a linked list is an O(1) operation.
However, linked lists also have some drawbacks. For example, linked lists do not support random access. This means that if you want to access the 100th item in a linked list, you must iterate through the first 99 items in the list. This is in contrast to arrays, where you can access any item in O(1) time.
export class LinkedList<TItem> {
private head: Node<TItem> | undefined;
public get Head(): Node<TItem> | undefined {
return this.head;
}
public Add(value: TItem): void {
const node = new Node(value);
node.Next = this.head;
this.head = node;
}
public Remove(value: TItem): void {
if (this.head === undefined) {
return;
}
if (this.head.Value === value) {
this.head = this.head.Next;
return;
}
let current = this.head;
while (current.Next !== undefined) {
if (current.Next.Value === value) {
current.Next = current.Next.Next;
return;
}
current = current.Next;
}
}
public InsertAfter(value: TItem, after: TItem): void {
if (this.head === undefined) {
throw new Error("List is empty");
}
let current: Node<TItem> | undefined = this.head;
while (current !== undefined) {
if (current.Value === after) {
const node = new Node(value);
node.Next = current.Next;
current.Next = node;
return;
}
current = current.Next;
}
throw new Error(`Value ${after} not found in list`);
}
public Contains(value: TItem): boolean {
let current = this.head;
while (current !== undefined) {
if (current.Value === value) {
return true;
}
current = current.Next;
}
return false;
}
public Count(): number {
let count = 0;
let current = this.head;
while (current !== undefined) {
count++;
current = current.Next;
}
return count;
}
public ToArray(): TItem[] {
const result = [];
let current = this.head;
while (current !== undefined) {
result.push(current.Value);
current = current.Next;
}
return result;
}
public [Symbol.iterator](): Iterator<TItem> {
let current = this.head;
return {
next(): IteratorResult<TItem> {
if (current === undefined) {
return { done: true, value: undefined };
}
const value = current.Value;
current = current.Next;
return { done: false, value };
},
};
}
}
class Node<TItem> {
private value: TItem;
private next: Node<TItem> | undefined;
constructor(value: TItem) {
this.value = value;
}
public get Value(): TItem {
return this.value;
}
public get Next(): Node<TItem> | undefined {
return this.next;
}
public set Next(value: Node<TItem> | undefined) {
this.next = value;
}
}
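Note that Add prepends to the head, which is what makes it O(1). Since this implementation does not keep a tail pointer, appending to the end of the list would require walking the whole list, i.e. O(n).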
Selection Sort
Selection Sort is a simple sorting algorithm with a run-time complexity of O(n²).
It works by finding the smallest element in the array, and pushing it to a new array. It then removes that element from the original array, and repeats the process until the original array is empty.
This is a very inefficient sorting algorithm, but it is very simple to implement.
function findSmallest(array: number[]): number {
let smallest = array[0];
let smallestIndex = 0;
for (let i = 0; i < array.length; i++) {
if (array[i] < smallest) {
smallest = array[i];
smallestIndex = i;
}
}
return smallestIndex;
}
export function selectionSort(array: number[]): number[] {
const sortedArray = [];
while (array.length > 0) {
const smallestIndex = findSmallest(array);
sortedArray.push(array.splice(smallestIndex, 1)[0]);
}
return sortedArray;
}
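Note that this version empties the input array as it goes, because splice removes elements from it. As a rough sketch (my own addition, not from the book), an in-place variant that swaps elements instead could look like this:

export function selectionSortInPlace(array: number[]): number[] {
  // Repeatedly swap the smallest remaining element into position i.
  for (let i = 0; i < array.length - 1; i++) {
    let smallestIndex = i;
    for (let j = i + 1; j < array.length; j++) {
      if (array[j] < array[smallestIndex]) {
        smallestIndex = j;
      }
    }
    [array[i], array[smallestIndex]] = [array[smallestIndex], array[i]];
  }
  return array;
}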
Quick Sort
Quick Sort works by picking a pivot element, and partitioning the array into two sub-arrays, according to whether they are less than or greater than the pivot.
It then recursively sorts the sub-arrays.
Its average run-time complexity is O(n log n), with a worst case of O(n²).
export function quickSort(array: number[]): number[] {
if (array.length <= 1) {
return array;
}
const pivot = array[0];
const less = array.slice(1).filter(x => x <= pivot);
const greater = array.slice(1).filter(x => x > pivot);
return [...quickSort(less), pivot, ...quickSort(greater)];
}
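One design note: picking the first element as the pivot, as above, means already-sorted input triggers the O(n²) worst case; picking a random pivot instead makes that worst case unlikely in practice.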
Graph Search Algorithms
Before presenting the search algorithms, I’ve included simple graph implementations for directed and weighted directed graphs. These are used by the algorithms.
Directed Graph
export class DirectedGraph<T> {
private graph: Map<T, T[]> = new Map();
public getNodes(): T[] {
return [...this.graph.keys()];
}
public getEdges(from: T): T[] {
return this.graph.get(from) || [];
}
public addNode(value: T): DirectedGraph<T> {
if (!this.graph.has(value)) {
this.graph.set(value, []);
}
return this;
}
public addEdge(from: T, to: T): DirectedGraph<T> {
this.graph.set(from, [...(this.graph.get(from) || []), to]);
return this;
}
public hasEdge(from: T, to: T): boolean {
return this.graph.get(from)?.includes(to) || false;
}
public removeEdge(from: T, to: T): DirectedGraph<T> {
if (!this.graph.has(from)) {
throw new Error("Node does not exist");
}
this.graph.set(
from,
(this.graph.get(from) || []).filter((x) => x !== to)
);
return this;
}
}
Weighted Directed Graph
type Edge<T> = { value: T; weight: number };
export class WeightedDirectedGraph<T> {
private graph: Map<T, Edge<T>[]> = new Map();
public getNodes(): T[] {
return [...this.graph.keys()];
}
public getEdges(from: T): Edge<T>[] {
return this.graph.get(from) || [];
}
public addNode(value: T): WeightedDirectedGraph<T> {
if (!this.graph.has(value)) {
this.graph.set(value, []);
}
return this;
}
public addEdge(from: T, to: T, weight: number): WeightedDirectedGraph<T> {
this.graph.set(from, [...(this.graph.get(from) || []), { value: to, weight }]);
return this;
}
public hasEdge(from: T, to: T): boolean {
return this.graph.get(from)?.map(x => x.value).includes(to) || false;
}
public removeEdge(from: T, to: T): WeightedDirectedGraph<T> {
if (!this.graph.has(from)) {
throw new Error("Node does not exist");
}
this.graph.set(
from,
      (this.graph.get(from) || []).filter((x) => x.value !== to)
);
return this;
}
}
Graph Search
Graph search is a family of algorithms for searching a graph for a node that satisfies a given property.
The search starts at one node (usually called the root node) and ends when the desired node is found.
The two most common graph search algorithms are breadth-first search and depth-first search.
Breadth-first search starts at the root node and explores all the neighboring nodes.
It then explores all the nodes that can be reached from those nodes, and so on.
Depth-first search starts at the root node and explores as far as possible along each branch before backtracking.
It then explores the next branch, and so on.
Here is a generalized search algorithm that can be used for both breadth-first and depth-first search.
You simply need to provide the appropriate functions for the search you want to perform.
function search<T>(
predicate: (node: T) => boolean,
shouldContinue: () => boolean,
getNode: () => T,
getEdges: (node: T) => T[],
addEdgeToFrontier: (edge: T) => void
): boolean {
const visited: T[] = [];
while (shouldContinue()) {
const node = getNode();
if (predicate(node)) {
return true;
}
if (visited.includes(node)) {
continue;
}
visited.push(node);
const edges = getEdges(node);
for (const edge of edges) {
addEdgeToFrontier(edge);
}
}
return false;
}
Breadth-First Search
The BFS algorithm is particularly useful for one thing: finding the shortest path on unweighted graphs.
In BFS the frontier is built as a first-in-first-out (FIFO) queue, i.e. the path that is selected from the frontier is the one that was added earliest.
It starts at some arbitrary node of the graph and explores the neighboring nodes first, before moving on to the next-level neighbors.
It works basically like a queue. It starts at some node, adds every unvisited neighbor to the queue, visits the next node in the queue, and repeats this process (adding that node's unvisited neighbors to the queue). See the queue data structure to learn more about this.
export function breadthFirstSearch<T>(
graph: DirectedGraph<T>,
from: T,
predicate: (node: T) => boolean
): boolean {
const queue: T[] = [from];
const shouldContinue = () => queue.length > 0;
const getNode = () => {
const node = queue.shift();
if (!node) throw new Error("Queue is empty");
return node as NonNullable<T>;
};
const getEdges = (node: T) => graph.getEdges(node) as NonNullable<T>[];
const addEdgeToFrontier = (edge: T) => queue.push(edge);
return search(
predicate,
shouldContinue,
getNode,
getEdges,
addEdgeToFrontier
);
}
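The implementation above only reports whether a matching node is reachable. Since the main selling point of BFS is shortest paths on unweighted graphs, here is a minimal sketch (my own addition, not from the book) of a variant that also returns the number of hops from the start node:

export function breadthFirstSearchDistance<T>(
  graph: DirectedGraph<T>,
  from: T,
  predicate: (node: T) => boolean
): number {
  // Each queue entry carries the node plus its depth (number of edges from `from`).
  const queue: Array<{ node: T; depth: number }> = [{ node: from, depth: 0 }];
  const visited = new Set<T>([from]);
  while (queue.length > 0) {
    const { node, depth } = queue.shift()!;
    if (predicate(node)) {
      return depth;
    }
    for (const neighbor of graph.getEdges(node)) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor);
        queue.push({ node: neighbor, depth: depth + 1 });
      }
    }
  }
  // No matching node is reachable from `from`.
  return -1;
}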
Depth-First Search
In short: explores the node branch as far as possible before being forced to backtrack and expand other nodes.
DFS is where the frontier is a last-in-first-out (LIFO) stack of paths.
Without a record of visited nodes, DFS can follow a cycle forever, and on a graph with infinitely many nodes it may never stop. (The implementation below tracks visited nodes via the generic search function, so cycles are handled.)
export function depthFirstSearch<T>(
graph: DirectedGraph<T>,
from: T,
predicate: (node: T) => boolean
): boolean {
const stack: T[] = [from];
const shouldContinue = () => stack.length > 0;
const getNode = () => {
const node = stack.pop();
if (!node) throw new Error("Stack is empty");
return node as NonNullable<T>;
};
const getEdges = (node: T) => graph.getEdges(node) as NonNullable<T>[];
const addEdgeToFrontier = (edge: T) => stack.push(edge);
return search(
predicate,
shouldContinue,
getNode,
getEdges,
addEdgeToFrontier
);
}
Dijkstra’s Algorithm
Dijkstra’s algorithm is a method for finding the shortest path between two nodes in a graph, by iteratively selecting the node with the smallest distance from the start node and updating the distances to its neighbors.
Dijkstra’s algorithm works on this basic idea:
- If the shortest path from the start vertex s to a vertex t passes through an intermediate vertex x, then it must contain the shortest path from s to x as well. So that path has to be computed before we can find the path from s to t.
- Dijkstra’s algorithm therefore proceeds in rounds, where each round finds the shortest path from s to some new vertex.
- That new vertex x is the one that minimizes dist(s, v) + w(v, x) over all unfinished vertices, where w(v, x) is the weight of the edge from vertex v to vertex x, and dist(s, v) is the length of the shortest path between s and v.
So, as we can see, this is a Dynamic Programming strategy.
The algorithm only works on graphs without negative-cost edges. If you need that, you can use the Floyd-Warshall Algorithm (not in this book).
export function dijkstra<T>(
graph: WeightedDirectedGraph<T>,
from: T,
to: T
): { distance: number, path: T[] } {
const distances = new Map<T, number>();
const previousNodes: Map<T, T | null> = new Map<T, T | null>();
graph.getNodes().forEach((node) => {
distances.set(node, Number.POSITIVE_INFINITY);
previousNodes.set(node, null);
});
distances.set(from, 0);
const unvisitedNodes = new Set(graph.getNodes());
while (unvisitedNodes.size > 0) {
const currentNode = Array.from(unvisitedNodes).sort(
(a, b) => distances.get(a)! - distances.get(b)!
)[0];
unvisitedNodes.delete(currentNode);
if (currentNode === to) {
break;
}
const currentDistance = distances.get(currentNode)!;
for (const neighbor of graph.getEdges(currentNode)) {
const newDistanceToNeighbor = currentDistance + neighbor.weight;
if (newDistanceToNeighbor < distances.get(neighbor.value)!) {
distances.set(neighbor.value, newDistanceToNeighbor);
previousNodes.set(neighbor.value, currentNode);
}
}
}
return {
distance: distances.get(to)!,
path: getShortestPath(previousNodes, from, to),
};
}
function getShortestPath<T>(previousNodes: Map<T, T | null>, from: T, to: T): T[] {
const path: T[] = [];
  let currentNode: T | null | undefined = to;
  while (currentNode !== from) {
    if (currentNode === null || currentNode === undefined) {
      throw new Error('Path not found');
    }
    path.unshift(currentNode);
    currentNode = previousNodes.get(currentNode);
}
// Add the start node at the beginning of the path
path.unshift(from);
return path;
}
Greedy Programming: Set Cover
First: Greedy algorithms select the best local option from all available choices with no regard for the global structure.
In essence, it looks for a local maximum and hopes it is also the global maximum.
This program implements a greedy approximation algorithm for the set-covering problem, known as the Greedy Set-Cover algorithm. The algorithm repeatedly selects the set that contains the most elements not already covered until all elements are covered.
This approximation algorithm doesn’t always find the optimal solution, but it guarantees a solution that is logarithmically close to the optimal one.
type SetType = Array<number>;
export function findMinSetCover(universe: SetType, sets: Array<SetType>): SetType[] {
const universeSet = new Set(universe);
const setArray = sets.map(set => new Set(set));
const resultSets: SetType[] = [];
while (universeSet.size !== 0) {
let maxSetIndex = 0;
let maxSetSize = 0;
for (let i = 0; i < setArray.length; i++) {
const currentSetSize = Array.from(setArray[i]).filter(value => universeSet.has(value)).length;
if (currentSetSize > maxSetSize) {
maxSetSize = currentSetSize;
maxSetIndex = i;
}
}
if (maxSetSize === 0) {
throw new Error("No set cover possible");
}
const maxSet = setArray[maxSetIndex];
resultSets.push(Array.from(maxSet));
maxSet.forEach(value => universeSet.delete(value));
}
return resultSets;
}
Dynamic Programming: Knapsack
Dynamic programming, like divide-and-conquer, solves problems by combining the solutions of subproblems.
The knapsack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.
It illustrates the dynamic programming method of solving problems by breaking them down into simpler subproblems. The subproblems in this case are smaller knapsack problems, which are solved in a bottom-up manner. The solution to the overall problem is then found by solving the subproblems, and using their solutions to gradually construct a solution to the original problem.
Our approach is to build a grid with a row for each item and a column for each capacity from 1 to C (the knapsack’s capacity). Each cell holds the maximum value that can be obtained for that combination of items and capacity. We fill the grid row by row, left to right.
Given the following items and a capacity of 4kg:
- Guitar: 1kg, $1500
- Stereo: 4kg, $3000
- Laptop: 3kg, $2000
The grid will look like this:
|        | 1kg       | 2kg       | 3kg       | 4kg        |
| ------ | --------- | --------- | --------- | ---------- |
| Guitar | $1500 (G) | $1500 (G) | $1500 (G) | $1500 (G)  |
| Stereo | $1500 (G) | $1500 (G) | $1500 (G) | $3000 (S)  |
| Laptop | $1500 (G) | $1500 (G) | $2000 (L) | $3500 (LG) |
The algorithm for filling the grid is as follows:
cell[i][j] = max(
cell[i - 1][j], // previous max
cell[i - 1][j - items[i].weight] + items[i].value // value of current item + max value of remaining capacity, which is the value of the cell in the previous row and the remaining capacity, i.e. cell[i - 1][j - items[i].weight]
)
interface Item {
name: string;
weight: number;
value: number;
}
export function knapsack(items: Item[], capacity: number): number {
// Here we create a cache to store the max value at each capacity for each item.
// The cache is a 2-dimensional array where each row represents an item and each column represents a capacity.
// The value at each row/column intersection is the max value for that item at that capacity.
// We use the cache to avoid recalculating values we've already calculated.
const cache: number[][] = [];
for (let row_idx = 0; row_idx < items.length; row_idx++) {
cache[row_idx] = [];
// This assumes that the capacity is a positive integer
// Meaning, we can't handle items that weigh fractions of a (unit of weight)
for (let col_idx = 0; col_idx <= capacity; col_idx++) {
cache[row_idx][col_idx] = -1;
}
}
return knapsackRecursive(items, items.length - 1, capacity, cache);
}
function knapsackRecursive(
items: Item[],
i: number,
capacity: number,
cache: number[][]
): number {
  // Base case: if we've run out of items to consider, there is nothing left to add, so return 0
if (i < 0) {
return 0;
}
// If the value is already in the cache, return it
if (cache[i][capacity] !== -1) {
return cache[i][capacity];
}
// If the weight of the current item is greater than the capacity, skip it
if (items[i].weight > capacity) {
cache[i][capacity] = knapsackRecursive(items, i - 1, capacity, cache);
return cache[i][capacity];
}
// Otherwise, return the max of two cases:
const previousMaxValue = knapsackRecursive(items, i - 1, capacity, cache);
const currentValuePlusMaxValueOfRemainingCapacity =
items[i].value +
knapsackRecursive(items, i - 1, capacity - items[i].weight, cache);
cache[i][capacity] = Math.max(
previousMaxValue,
currentValuePlusMaxValueOfRemainingCapacity
);
return cache[i][capacity];
}
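The implementation above memoizes a top-down recursion. The row-by-row grid filling described earlier maps more directly onto a bottom-up version; here is a minimal sketch of that variant (my own naming, not from the book):

export function knapsackBottomUp(items: Item[], capacity: number): number {
  // grid[i][c] is the best value achievable using the first i items with capacity c.
  const grid: number[][] = Array.from({ length: items.length + 1 }, () =>
    new Array(capacity + 1).fill(0)
  );
  for (let i = 1; i <= items.length; i++) {
    const { weight, value } = items[i - 1];
    for (let c = 0; c <= capacity; c++) {
      // Either skip the current item, or (if it fits) take it plus the best
      // value achievable with the remaining capacity.
      grid[i][c] =
        weight > c
          ? grid[i - 1][c]
          : Math.max(grid[i - 1][c], grid[i - 1][c - weight] + value);
    }
  }
  return grid[items.length][capacity];
}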
Distance functions
The book only described Euclidean Distance & briefly mentioned Cosine Similarity in the context of finding similarity between points in a space.
I’ve implemented a variety of approaches.
First, a helper function:
type Distance = Map<Point, Map<Point, number>>;
export interface Point {
label: string;
featureValues: number[];
}
export function calculateDistanceBetweenAllPoints(
points: Point[],
fn: (a: Point, b: Point) => number
) {
const distances: Distance = new Map();
for (let i = 0; i < points.length; i++) {
distances.set(points[i], new Map());
for (let j = 0; j < points.length; j++) {
distances.get(points[i])!.set(points[j], 0);
}
}
for (let i = 0; i < points.length; i++) {
for (let j = i; j < points.length; j++) {
const distance = fn(points[i], points[j]);
distances.get(points[i])!.set(points[j], distance);
distances.get(points[j])!.set(points[i], distance);
}
}
return distances;
}
Euclidean Distance
The Euclidean distance between two points is the square root of the sum of the squared differences of their coordinates.
export function euclidean(pointA: Point, pointB: Point) {
const sumOfSquares = pointA.featureValues.reduce((sum, value, index) => {
return sum + Math.pow(value - pointB.featureValues[index], 2);
}, 0);
return Math.sqrt(sumOfSquares);
}
Cosine Similarity
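Cosine similarity measures the cosine of the angle between two vectors: the dot product divided by the product of their magnitudes. Unlike the other functions here, a larger value means the points are more similar, not further apart.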
export function cosineSimilarity(pointA: Point, pointB: Point) {
const _dotProduct = dotProduct(pointA, pointB);
const magnitudeA = magnitude(pointA);
const magnitudeB = magnitude(pointB);
if (magnitudeA === 0 || magnitudeB === 0) {
throw new Error(
"Cannot compute cosine similarity: one or both of the vectors have a magnitude of zero."
);
}
return _dotProduct / (magnitudeA * magnitudeB);
}
function dotProduct(pointA: Point, pointB: Point) {
return pointA.featureValues.reduce((sum, value, index) => {
return sum + value * pointB.featureValues[index];
}, 0);
}
function magnitude(point: Point) {
return Math.sqrt(
point.featureValues.reduce((sum, value) => {
return sum + Math.pow(value, 2);
}, 0)
);
}
Manhattan Distance
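The Manhattan distance is the sum of the absolute differences of the coordinates, i.e. the distance you would have to walk along a rectangular grid of streets.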
export function manhattan(pointA: Point, pointB: Point) {
const sumOfAbsoluteDifferences = pointA.featureValues.reduce(
(sum, value, index) => {
return sum + Math.abs(value - pointB.featureValues[index]);
},
0
);
return sumOfAbsoluteDifferences;
}
K-Nearest Neighbor
Classification is the process of taking a data point and assigning it a label. The k-nearest neighbor algorithm is a classification algorithm that takes a data point and looks at its k nearest neighbors, i.e. the k existing data points that are closest to it.
For example, if you want to classify a new data point as either a cat or a dog, you can look at the k most similar data points you already have and assign whichever label appears most often among them.
Your existing data points are points in a space, and the distance between two points is the distance between them in that space. The k-nearest neighbor algorithm uses the distance between points to determine which points are nearest to each other.
import { Point } from "./Point.ts";
export function kNearestNeighbors(
k: number,
points: Point[],
queryPoint: Point,
distanceFn: (a: Point, b: Point) => number
) {
const distances = points.map((point) => {
return {
point,
distance: distanceFn(point, queryPoint),
};
});
distances.sort((a, b) => a.distance - b.distance);
return distances.slice(0, k);
}
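kNearestNeighbors returns the neighbors themselves; to classify, you then take a majority vote over their labels. A minimal sketch of such a helper (my own addition, not from the book):

export function classify(
  k: number,
  points: Point[],
  queryPoint: Point,
  distanceFn: (a: Point, b: Point) => number
): string {
  const neighbors = kNearestNeighbors(k, points, queryPoint, distanceFn);
  // Count how often each label appears among the k nearest neighbors.
  const votes = new Map<string, number>();
  for (const { point } of neighbors) {
    votes.set(point.label, (votes.get(point.label) ?? 0) + 1);
  }
  // Return the label with the most votes.
  return [...votes.entries()].sort((a, b) => b[1] - a[1])[0][0];
}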
Main
This is what ties everything together: it includes usage examples for each of the implementations above.
import { binarySearch } from "./binarysearch.ts";
import { dijkstra } from "./dijkstra.ts";
import {
calculateDistanceBetweenAllPoints,
cosineSimilarity,
euclidean,
manhattan,
} from "./distances.ts";
import { DirectedGraph } from "./graph.ts";
import { breadthFirstSearch, depthFirstSearch } from "./graphsearch.ts";
import { knapsack } from "./knapsack.ts";
import { kNearestNeighbors } from "./knn.ts";
import { LinkedList } from "./linkedlist.ts";
import { quickSort } from "./quicksort.ts";
import { selectionSort } from "./selectionsort.ts";
import { findMinSetCover } from "./setcover.ts";
import { WeightedDirectedGraph } from "./weightedgraph.ts";
if (import.meta.main) {
/* --- Binary Search --- */
const list = new Array(10_000_000).fill(0).map((_, i) => i);
console.log(`Binary Search Guess: ${binarySearch(list, 3)}`);
console.log(
"Worst case for O(log_2 n):",
Math.round(Math.log2(list.length))
);
/* --- Linked List --- */
const ll = new LinkedList<number>();
for (let i = 0; i < 10_000; i++) {
ll.Add(i);
}
/* --- Selection Sort --- */
const ss_l = new Array(10)
.fill(0)
.map(() => Math.floor(Math.random() * 100));
const ss_l_copy = [...ss_l];
console.log("Selection Sort:\n", selectionSort(ss_l), "\n", ss_l_copy);
/* --- Quick Sort --- */
const qs_l = new Array(10)
.fill(0)
.map(() => Math.floor(Math.random() * 100));
const qs_l_copy = [...qs_l];
console.log("Quick Sort:\n", quickSort(qs_l), "\n", qs_l_copy);
/* --- Graph Search Algorithms --- */
const graph = new DirectedGraph<string>();
graph
.addNode("you")
.addNode("alice")
.addNode("bob")
.addNode("claire")
.addNode("anuj")
.addNode("peggy")
.addNode("thom")
.addNode("jonny")
.addEdge("you", "alice")
.addEdge("you", "bob")
.addEdge("you", "claire")
.addEdge("bob", "anuj")
.addEdge("bob", "peggy")
.addEdge("alice", "peggy")
.addEdge("claire", "thom")
.addEdge("claire", "jonny")
.addEdge("anuj", "thom")
.addEdge("peggy", "thom")
.addEdge("thom", "jonny");
console.log(
"Breadth First Search:",
breadthFirstSearch(graph, "you", (node) => node === "jonny")
);
console.log(
"Depth First Search:",
depthFirstSearch(graph, "you", (node) => node === "jonny")
);
/* --- Dijkstra's Algorithm --- */
const graph2 = new WeightedDirectedGraph<string>();
graph2
.addNode("Twin Peaks")
.addNode("A")
.addNode("B")
.addNode("C")
.addNode("D")
.addNode("E")
.addNode("Golden Gate Bridge")
.addEdge("Twin Peaks", "A", 4)
.addEdge("Twin Peaks", "B", 10)
.addEdge("A", "D", 21)
.addEdge("B", "C", 5)
.addEdge("B", "E", 8)
.addEdge("C", "D", 5)
.addEdge("E", "D", 12)
.addEdge("D", "Golden Gate Bridge", 4);
const { distance, path } = dijkstra(
graph2,
"Twin Peaks",
"Golden Gate Bridge"
);
console.log(
`Dijkstra's Algorithm has cost ${distance} via path: ${path.join(
" -> "
)}.`
);
/* --- Greedy Programming: Set Cover --- */
const universe = [1, 2, 3, 4, 5];
const sets = [
[1, 2, 3],
[2, 4],
[3, 4],
[4, 5],
];
console.log("Universe:", universe);
console.log("Sets:", sets);
console.log("Greedy set cover:", findMinSetCover(universe, sets));
/* --- Dynamic Programming: Knapsack --- */
const maxValue = knapsack(
[
{ name: "guitar", weight: 1, value: 1500 },
{ name: "stereo", weight: 4, value: 3000 },
{ name: "laptop", weight: 3, value: 2000 },
// Without the iPhone, the knapsack would be filled with the laptop and the guitar ($3500)
// With the iPhone, the knapsack would be filled with the laptop and the iPhone ($4000)
{ name: "iphone", weight: 1, value: 2000 },
],
4
);
console.log("Max value:", maxValue);
/* --- K-Nearest-Neighbors --- */
const points = [
{ featureValues: [1, 2, 3], label: "A" },
{ featureValues: [4, 5, 6], label: "B" },
{ featureValues: [7, 8, 9], label: "C" },
{ featureValues: [10, 11, 12], label: "D" },
{ featureValues: [13, 14, 15], label: "E" },
];
const dist_euclid = calculateDistanceBetweenAllPoints(points, euclidean);
const cosine_sim = calculateDistanceBetweenAllPoints(
points,
cosineSimilarity
);
const dist_manhattan = calculateDistanceBetweenAllPoints(points, manhattan);
console.log("Euclidean Distance:", dist_euclid);
console.log("Cosine Similarity:", cosine_sim);
console.log("Manhattan Distance:", dist_manhattan);
const knn = kNearestNeighbors(2, points.slice(1), points[0], euclidean);
console.log("K-Nearest Neighbors:", knn);
}