← Go Back

# Composing Programs by John DeNero

Rating: 8 / 10

## Thoughts

A joy to read. Teaches the reader the fundamentals of programming. Great for beginners.

# Chapter 1: Building Abstractions with Functions

Most of this I already have notes on from 'Structure and Interpretation of Computer Programs' — so this won't be exhaustive.

# 1.1 Getting Started

Some guiding principles of debugging are:

1. Test incrementally: "Every well-written program is composed of small, modular components that can be tested individually. Try out everything you write as soon as possible to identify problems early and gain confidence in your components."
2. Isolate errors: "When trying to diagnose a problem, trace the error to the smallest fragment of code you can before trying to correct it."
3. Check your assumptions: The computer does what you tell it to. You could be wrong in your understanding.

# 1.2 Elements of Programming

Every powerful language has three mechanisms:

• primitive expressions and statements
• means of combination (combining those expressions and statements)
• means of abstraction (giving those combinations names and manipulating them as units)

In programming, we deal with two kinds of elements: functions and data.

Data is stuff we want to manipulate, and functions describe the rules for manipulating it.

Python, multiple assignment:

```python
x, y = 3, 4
y, x = x, y
```

So now x = 4 & y = 3.

# 1.3 Defining New Functions

Use good names.

When making functions, keep its functionality in mind. Don't make it do something it's not 'supposed to'. Make it easy for others to use and understand.

Think as if someone was to use your code just by the names of your functions.

# 1.4 Designing Functions

• Each function should only have one job.
• DRY = Don't repeat yourself
• Make functions general
• Make it easy: use default argument values
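The book illustrates the last point with a pressure function for an ideal gas, where the particle count defaults to one mole (reconstructed from memory, so double-check against the book):

```python
def pressure(v, t, n=6.022e23):
    """Compute the pressure in pascals of an ideal gas.

    v -- volume of gas, in cubic meters
    t -- absolute temperature, in kelvin
    n -- particles of gas (defaults to one mole)
    """
    k = 1.38e-23  # Boltzmann's constant
    return n * k * t / v

pressure(1, 273.15)                 # n defaults to one mole
pressure(1, 273.15, 3 * 6.022e23)   # or pass it explicitly
```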

# 1.5 Control

## 1.5.6 Testing

Assertions:

```python
def fib_test():
    assert fib(2) == 1, 'The 2nd Fibonacci number should be 1'
    assert fib(3) == 1, 'The 3rd Fibonacci number should be 1'
    assert fib(50) == 7778742049, 'Error at the 50th Fibonacci number'
```
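These asserts assume a fib like the book's iterative version from section 1.5, where fib(2) and fib(3) are both 1 (a sketch from memory, so double-check against the book):

```python
def fib(n):
    """Compute the nth Fibonacci number, for n >= 2."""
    pred, curr = 0, 1  # fib(1), fib(2)
    k = 2              # which Fibonacci number curr is
    while k < n:
        pred, curr = curr, pred + curr
        k += 1
    return curr

def fib_test():
    assert fib(2) == 1, 'The 2nd Fibonacci number should be 1'
    assert fib(3) == 1, 'The 3rd Fibonacci number should be 1'
    assert fib(50) == 7778742049, 'Error at the 50th Fibonacci number'

fib_test()  # passes silently
```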

# 1.6 Higher-Order Functions

Functions can be used as arguments in other functions (and therefore applied in those functions). Therefore, if we have a few functions that mostly do the same, we can create another abstract function that would 'replace' those (enforcing DRY) — and then just pass in the difference in the procedure as another function.
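The book's summation example shows this pattern: sums of cubes, sums of naturals, etc. all share one structure, so the varying part (the term) is passed in as a function (a sketch from memory):

```python
def summation(n, term):
    """Sum term(k) for k from 1 to n."""
    total, k = 0, 1
    while k <= n:
        total, k = total + term(k), k + 1
    return total

def cube(x):
    return x * x * x

def identity(x):
    return x

summation(5, cube)      # 1 + 8 + 27 + 64 + 125 = 225
summation(5, identity)  # 1 + 2 + 3 + 4 + 5 = 15
```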

Currying

```python
def curried_pow(x):
    def h(y):
        return pow(x, y)
    return h

curried_pow(2)(3)  # => 8
```

Lambda functions

```python
def compose1(f, g):
    # Takes x and returns f(g(x)). Has no name but behaves like any other function.
    return lambda x: f(g(x))
```

Same as above: `compose1 = lambda f, g: lambda x: f(g(x))`, but much less readable.

## 1.6.9 Function Decorators

An example would be a @trace decorator. Example from the book:

```python
def trace(fn):
    def wrapped(x):
        print('-> ', fn, '(', x, ')')
        return fn(x)
    return wrapped

@trace
def triple(x):
    return 3 * x

triple(12)
#=> -> <function triple at 0x102a39848> ( 12 )
#=> 36
```

# 1.7 Recursive Functions

A function is recursive if the body of the function calls the function itself (directly or indirectly).

So if a function calls itself inside itself.

It's all about breaking problems into smaller / simpler problems and solving those recursively.

## 1.7.1 The Anatomy of Recursive Functions

Usually the body of a recursive function begins with a base case (conditional statement for the inputs which are the simplest to process).

After which there are one or more recursive calls. For each subsequent call, there's less work left to be done.

A simple example for computing factorials, found in the book:

```python
def fact(n):
    if n == 1:
        return 1
    else:
        return n * fact(n - 1)
```

## 1.7.2 Mutual Recursion

"When a recursive procedure is divided among two functions that call each other, the functions are said to be mutually recursive."

Example given:

```python
def is_even(n):
    if n == 0:
        return True
    else:
        return is_odd(n-1)

def is_odd(n):
    if n == 0:
        return False
    else:
        return is_even(n-1)
```

Or as one function:

```python
def is_even(n):
    if n == 0:
        return True
    else:
        if (n-1) == 0:
            return False
        else:
            return is_even((n-1)-1)
```

### 1.7.4 Tree Recursion

When a function calls itself more than once.

Example for calculating a sequence of Fibonacci numbers (each number = sum of preceding two)

```python
def fib(n):
    if n == 1:
        return 0
    if n == 2:
        return 1
    else:
        return fib(n-2) + fib(n-1)
```

The calls will look like branches on a tree, if you were to draw it.

# 2.1 Introduction

### 2.1.1 Native Data Types

"Every value in Python has a class that determines what type of value it is."

If two values share a class, they share behavior. They can be treated similarly.

Check the type of something by using type()

There are a few native data types in Python (ints, for example) which have these properties:

1. There are expressions that evaluate to values of native types (literals)
2. There are built-in functions and operators to manipulate values of native types.

There are three native numeric types in Python: integers (int), real numbers (float), and complex numbers (complex).

Floats can represent a wide range of fractional numbers — but not all can be represented exactly, and there are minimum and maximum values. As such, floats should be treated as approximations to real values. This is important to remember.
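A classic illustration, with the standard workaround of comparing against a tolerance instead of using ==:

```python
import math

# 0.1 and 0.2 have no exact binary representation, so the sum drifts slightly.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Compare floats with a tolerance instead of exact equality.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```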

# 2.2 Data Abstraction

Isolating the parts of the program that deal with how data is represented from the parts that deal with how it is manipulated is a design methodology called data abstraction.

Basically: Separating how a compound data value is used from the details of how it is constructed.

The program should make as few assumptions about the data as possible.

"The floor division operator, //, expresses integer division, which rounds down the fractional part of the result of division."

Python lists and arrays are pretty similar, apart from a few differences.
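The book's running example for data abstraction is rational numbers: a constructor plus selectors, so the rest of the program never assumes how a rational is stored. A sketch (using math.gcd where older editions use fractions.gcd):

```python
from math import gcd

def rational(n, d):
    """Construct a rational number n/d, reduced to lowest terms."""
    g = gcd(n, d)
    return [n // g, d // g]

def numer(x):
    """Select the numerator."""
    return x[0]

def denom(x):
    """Select the denominator."""
    return x[1]

def mul_rational(x, y):
    # Uses only the constructor and selectors — never the list representation.
    return rational(numer(x) * numer(y), denom(x) * denom(y))

product = mul_rational(rational(1, 2), rational(2, 3))
print(numer(product), denom(product))  # 1 3
```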

# 2.3 Sequences

Sequence = ordered collection of values. This is not a specific data type, but just a concept whereupon there are different variations. They do, however, share common behavior:

• Length: A sequence has a finite length.
• Element selection: A sequence has an element at any non-negative integer index less than its length, starting at zero. So a sequence $S$ of length $n$ has elements at indices $0, \ldots, n-1$.

Lists in Python are an example of a sequence.

Chapter introduces an implementation of linked lists as nested lists.
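The idea, roughly (the book's version adds validation of its arguments):

```python
empty = 'empty'

def link(first, rest):
    """A linked list is a two-element list: its first element, and the rest of the list."""
    return [first, rest]

def first(s):
    return s[0]

def rest(s):
    return s[1]

four = link(1, link(2, link(3, link(4, empty))))
print(four)               # [1, [2, [3, [4, 'empty']]]]
print(first(four))        # 1
print(first(rest(four)))  # 2
```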

# 2.4 Mutable Data

Adding state to data is central to object-oriented programming (OOP).

## 2.4.1 The Object Metaphor

Objects combine data with behavior.

Following the example in the book...

from datetime import date

date is bound to a class. In this case, any individual dates would be called instances of that class.

We construct those instances by calling the class on arguments that characterize the instance. An example would be today, in terms of date:

today = date(2020, 6, 30)

Objects have attributes — named values that are part of the object. Python (like many languages) uses dot notation: expression.name. For example, the output of print(today.year) would be 2020.

Objects have methods — functions as attributes of the object. Something the 'object knows how to do'. When calling a method on an instance, that instance is 'taken as an argument' along with the other arguments.

## 2.4.3 Dictionaries

They look very similar to enums from C to me. Probably aren't that similar in reality.

```python
numerals = {'I': 1.0, 'V': 5, 'X': 10}
print(numerals['X'])
#=> 10
```

## 2.4.4 Local State

Example given is:

```python
def make_withdraw(balance):
    """Return a withdraw function that draws down balance with each call."""
    def withdraw(amount):
        nonlocal balance
        if amount > balance:
            return 'Insufficient funds'
        balance = balance - amount
        return balance
    return withdraw
```

Now we can make a pretend bank account from which we can only withdraw.

```python
acc = make_withdraw(100)
print(acc(101))
#=> Insufficient funds
print(acc(50))
#=> 50
```

# 2.5 Object-Oriented Programming

Classes create abstraction barriers between the use and implementation of data.

Objects respond to 'behavioral requests' (functions — ie. 'do this').

Objects have local state — not directly accessible from global environment.

Objects have a type (its class). So to make new types of data, we make new classes.

## 2.5.1 Objects and Classes

A class is a sort of template for all the objects that have that class as their type.

When defining a class, we define which attributes and methods that it will 'contain'.

If we take the example from earlier, a bank Account class should have a balance value and a withdraw method. It should probably also be able to return the name of the account holder and the current balance, and have a deposit method that takes an amount.

An attribute of an object is a name-value pair associated with the object (accessible via dot notation). Attributes connected to specific objects are called instance attributes. They are also called fields, properties, or instance variables in other languages.

Each Account would have its own holder attribute.

Methods do specific things which can depend on (and change) other attributes of the object they are called (invoked) upon.

## 2.5.2 Defining Classes

```
class <name>:
    <suite>
```

Where <suite> is a suite of statements to define the attributes of the class.

The suite contains def statements that define new methods for objects of that class. We can make a constructor in Python by defining the method `__init__`. This method initializes objects.

Example:

```python
class Account:
    def __init__(self, account_holder):
        self.balance = 0
        self.holder = account_holder
```

So when christian = Account("Christian") is called, a new object is created and bound to the christian variable, with a starting balance of 0. self is bound to the newly created object. Notice that creating the account only took one argument (the account_holder string) — self is filled in automatically.

We can add some methods to the Account class.

```python
class Account:
    def __init__(self, account_holder):
        self.balance = 0
        self.holder = account_holder

    def deposit(self, amount):
        self.balance = self.balance + amount
        return self.balance

    def withdraw(self, amount):
        if amount > self.balance:
            return 'Insufficient funds'
        self.balance = self.balance - amount
        return self.balance
```

When we invoke a method upon an instance of the Account class, that instance is bound to the self argument, while (in this example), our argument for the call is simply an amount.

## 2.5.3 Message Passing and Dot Expressions

How does Python actually bind the object to self?

It distinguishes between functions and bound methods.

A bound method value is already associated with its first argument (the instance on which it was invoked), which will be named self when the method is called.
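The difference is visible in the interpreter. A sketch using a minimal Account class:

```python
class Account:
    def __init__(self, account_holder):
        self.balance = 0
        self.holder = account_holder

    def deposit(self, amount):
        self.balance = self.balance + amount
        return self.balance

acc = Account('Christian')

# Accessed on the class, deposit is a plain function: self must be passed explicitly.
print(type(Account.deposit))      # <class 'function'>
print(Account.deposit(acc, 100))  # 100

# Accessed on the instance, deposit is a bound method: acc is already attached as self.
print(type(acc.deposit))  # <class 'method'>
print(acc.deposit(50))    # 150
```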

## 2.5.4 Class Attributes

Attributes can be shared throughout all instances of a given class.

If we continue the above example, then the bank might want an interest attribute shared across all accounts. Then they don't have to change it for every account when it changes, but only for the class itself (thereby updating it on every account).

Class attributes can also be called class variables or static variables in other languages.

Example is implemented like so:

```python
class Account:
    interest = 0.02  # <- Class attribute
    # ...methods...
```

You can still access it from any instance. Changing it on the class — Account.interest = 0.04 — changes it for all instances. But assigning through an instance, christian.interest = 0.04, does not change it for everyone else: it creates an instance attribute on christian that shadows the class attribute, and only christian is affected.
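A quick demonstration of how class assignment and instance assignment behave:

```python
class Account:
    interest = 0.02  # class attribute, shared by all instances

    def __init__(self, account_holder):
        self.holder = account_holder

a = Account('Christian')
b = Account('Jessica')

Account.interest = 0.04        # change on the class: visible from every instance
print(a.interest, b.interest)  # 0.04 0.04

a.interest = 0.08              # instance assignment: shadows the class attribute on a only
print(a.interest, b.interest)  # 0.08 0.04
```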

To evaluate a dot expression:

Remember: <expression>.<name>

1. Evaluate the expression to the left of the dot. That yields the object of the dot expression.
2. <name> is matched against instance attributes of that object. If it exists, returns the value.
3. If not, look up <name> in the class — yielding a class attribute value.
4. That value is then returned (unless it's a function, in which case a bound method is returned instead).

There are additional nuances when looking up a name in a class (because of class inheritance).

## 2.5.5 Inheritance

Different types (classes) can be related. Classes can differ in their amount of specialization.

Two classes may have similar attributes, but one represents a special case of the other. For example, a Dog could be a special case of an Animal. Or a CheckingAccount being a special case of an Account. The generic account will serve as the base class of a CheckingAccount, while CheckingAccount will be a subclass of Account.

Terms parent class and superclass are also used for base class. And child class is also used for subclass.

A subclass inherits the attributes of its base class — but it can override certain attributes and methods. When making a class that inherits from another, we only have to specify what is different — everything else would simply be the same as for the base class.

You can think of it as a is-a relationship, rather than a has-a relationship. So a Dog is a type of Animal.

## 2.5.6 Using Inheritance

When you've defined the base class, you simply define the subclass:

```
class ClassName(INHERITS_FROM):
    # What should be different?
```

Where INHERITS_FROM is the name of the class that it inherits from.
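The book's example is a CheckingAccount that charges for withdrawals and earns less interest; a sketch (reconstructed from memory):

```python
class Account:
    interest = 0.02

    def __init__(self, account_holder):
        self.balance = 0
        self.holder = account_holder

    def deposit(self, amount):
        self.balance = self.balance + amount
        return self.balance

    def withdraw(self, amount):
        if amount > self.balance:
            return 'Insufficient funds'
        self.balance = self.balance - amount
        return self.balance

class CheckingAccount(Account):
    """A bank account that charges for withdrawals."""
    withdraw_charge = 1
    interest = 0.01

    def withdraw(self, amount):
        # Override withdraw, but reuse the base class version for the real work.
        return Account.withdraw(self, amount + self.withdraw_charge)

checking = CheckingAccount('Sam')
checking.deposit(20)
print(checking.withdraw(5))  # 14  (5 plus the 1-dollar charge)
print(checking.interest)     # 0.01 — overridden; plain Accounts still get 0.02
```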

Going back to the looking up in dot notation, the procedure of looking up goes through every base class in the inheritance chain for the original object's class. This can be defined recursively:

1. If the name is an attribute in the class, return the attribute value
2. If not, look up the name in the base class (if it exists)

### Interfaces

It's common in OOP that different types of objects share the same attribute names.

Object interface = collection of attributes and conditions on those attributes.

I wasn't satisfied with the explanation given here, so here's some supplementary info:

An Interface describes the actions that an object can do. If something is to be x, it has to be able to do y (and maybe more).

## 2.5.7 Multiple Inheritance

Python can do multiple inheritance. That just means that a subclass can inherit attributes from multiple base classes. To do it, simply put multiple arguments when making the class:

```
class ClassName(Inherits_From_1, Inherits_From_2, And_So_On):
    ...
```

Python resolves names from left to right, then upwards (remember looking up names).
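Python's actual resolution order (the C3 method resolution order) is visible via `__mro__`; "left to right, then upwards" is a good approximation of it. A sketch with hypothetical classes:

```python
class A:
    def who(self):
        return 'A'

class B(A):
    def who(self):
        return 'B'

class C(A):
    def who(self):
        return 'C'

class D(B, C):
    pass

print(D().who())  # 'B' — B is listed first, so its method wins
print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
```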

## 2.5.8 The Role of Objects

Objects should promote a 'separation of concerns' among different parts of the program.

OOP is great in a lot of cases, but there are cases where it's not the best solution. It's up to the programmer to identify which is best in their specific case.

# 2.6 Implementing Classes and Objects

It's possible to implement OOP in programming languages which do not have it built in.

Classes and objects can be represented as dictionaries and functions.

This is what this section focuses on; the implementation of OOP without the native support.

## 2.6.1 Instances

Instances have named attributes.

We can implement an instance using a dispatch dictionary that responds to messages that get and set attribute values. Attributes are stored in a local dictionary called attributes.

Dictionaries are simply abstract data types and can be implemented in various ways (lists, for example).
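A stripped-down sketch of the dispatch-dictionary idea (the book's make_instance also dispatches to a class for unknown names and binds methods):

```python
def make_instance():
    """An 'object' as a dispatch dictionary responding to get/set messages."""
    attributes = {}

    def get_value(name):
        return attributes[name]

    def set_value(name, value):
        attributes[name] = value

    return {'get': get_value, 'set': set_value}

obj = make_instance()
obj['set']('balance', 100)
print(obj['get']('balance'))  # 100
```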

## 2.6.2 Classes

A class is an object.

In Python, a class has a class called type.

# 2.7 Object Abstraction

Generic functions are a central concept in object abstraction. It's a function that can accept values of multiple different types.

This book details three techniques of implementing them: shared interfaces, type dispatching, and type coercion.

## 2.7.1 String Conversion

str makes a string human readable. repr makes it 'Python readable'.

The way that repr works is that it always invokes the method __repr__ on its argument. And by implementing this in classes we create, we can extend the functionality of repr to include our user-made classes.

str does the same — it invokes __str__ on its argument.

These are called polymorphic functions because they 'can be applied to many (poly) different forms (morph) of data'.

A way to create such a function is to use a shared attribute name with a different definition in each class.
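A sketch of a class implementing both (the Ratio name follows the book's example, reconstructed from memory):

```python
class Ratio:
    def __init__(self, n, d):
        self.numer = n
        self.denom = d

    def __repr__(self):
        # 'Python-readable': an expression that recreates the object
        return 'Ratio({0}, {1})'.format(self.numer, self.denom)

    def __str__(self):
        # Human-readable
        return '{0}/{1}'.format(self.numer, self.denom)

half = Ratio(1, 2)
print(str(half))   # 1/2
print(repr(half))  # Ratio(1, 2)
```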

## 2.7.2 Special Methods

Python uses special names which are invoked by the Python interpreter in special circumstances.

• __init__ is automatically invoked when an object is constructed.
• __str__ is invoked automatically when printing.
• __repr__ is invoked in an interactive session to display values.

A few more examples

• __bool__
• __len__
• __getitem__
• __call__

## 2.7.3 Multiple Representations

We might want to represent data in multiple ways.

Interface = shared set of attribute names + how they behave.

Object attributes allow the different data types to respond to the same message in different ways.

We can calculate attributes 'on the fly' using the @property decorator in Python.
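For example, a computed attribute (the Circle class is my hypothetical example, not the book's):

```python
from math import pi

class Circle:
    def __init__(self, radius):
        self.radius = radius

    @property
    def area(self):
        # Computed on the fly from radius; accessed without parentheses.
        return pi * self.radius ** 2

c = Circle(2)
print(c.area)  # 12.566...
c.radius = 3
print(c.area)  # 28.274... — recomputed automatically
```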

## 2.7.4 Generic Functions

### Type dispatching

"One way to implement cross-type operations is to select behavior based on the types of the arguments to a function or method. The idea of type dispatching is to write functions that inspect the type of arguments they receive, then execute code that is appropriate for those types."

### Coercion

"Often the different data types are not completely independent, and there may be ways by which objects of one type may be viewed as being of another type. This process is called coercion."

# 2.8 Efficiency

Efficiency = computational resources used by a representation or process. These resources could be time or memory, for example.

## 2.8.1 Measuring Efficiency

Remember the Tree Recursive method of calculating the Fibonacci numbers from chapter 1?

This one:

```python
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n-2) + fib(n-1)
```

It's horribly inefficient. Why? Because it does so much redundant computation. It takes 13529 calls to calculate fib(19).

### Space

When an expression is being evaluated, the interpreter preserves all active environments and all values and frames referenced by those environments.

An environment is active if it provides the evaluation context for some expression being evaluated. It becomes inactive when the function call for which its first frame was created finally returns.

"In general, the space required for tree-recursive functions will be proportional to the maximum depth of the tree."

It only needs to keep track of the nodes above the current one at any point in the computation.

For the fib function, the space requirement (measured in active frames) is one less than the input (usually small). The time requirement (measured in total recursive calls) is larger than the output, which is usually huge.

## 2.8.2 Memoization

Tree-recursive computational processes can often be made more efficient through memoization. That's not a spelling mistake, by the way.

It's a technique for increasing efficiency of recursive functions that repeat computation.

A memoized function stores the return value for any argument it has previously received. So it remembers what it has already computed, and just returns the stored result instead of computing it again.

We can create a memoization function as a higher-order function for which we can apply other functions:

```python
def memo(f):
    cache = {}
    def memoized(n):
        if n not in cache:
            cache[n] = f(n)
        return cache[n]
    return memoized
```

Using this on the fib function from earlier, and computing fib(19), we only do 20 calls in total.
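Rebinding fib so that the recursive calls also go through the cache, with a counter to verify the call count (the counter is my addition, not the book's):

```python
call_count = 0

def fib(n):
    global call_count
    call_count += 1
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n-2) + fib(n-1)

def memo(f):
    cache = {}
    def memoized(n):
        if n not in cache:
            cache[n] = f(n)
        return cache[n]
    return memoized

fib = memo(fib)  # recursive calls to fib now hit the cache too
print(fib(19))     # 4181
print(call_count)  # 20 — one real computation per argument 0..19
```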

## 2.8.3 Orders of Growth

We categorize a process with a group of processes that have similar requirements. That's an easy way to analyze it.

One of these categorizations is the order of growth of a process. It's just how the resource requirements of a process grow as a function of the input.

We use the Theta notation for this.

$n$ measures the size of the input of a process.

$R(n)$ is how much of a resource that the process requires for an input of size $n$.

This could be

• memory used,
• time required to evaluate an expression (based on number of steps the process has to take),
• or some other resource.

So we say that $R(n)$ has order of growth $\Theta(f(n))$, and we write it: $R(n)=\Theta(f(n))$. Pronounced 'theta of f(n)'. It has that order of growth if there are positive constants $k_1$ and $k_2$ independent of $n$ such that:

$k_1 \cdot f(n) \le R(n) \le k_2 \cdot f(n)$

for any value of $n$ larger than some minimum $m$. So if $n$ is large enough, $R(n)$ is always sandwiched between the lower bound $k_1 \cdot f(n)$ and the upper bound $k_2 \cdot f(n)$.

## 2.8.5 Growth Categories

"Orders of growth are designed to simplify the analysis and comparison of computational processes."

Constants do not matter: $\Theta(n)$ and $\Theta(500 \cdot n)$ are the same order of growth.

This is because we can simply choose some arbitrary constants $k_1$ and $k_2$ for the upper and low bounds. So constants are always omitted from orders of growth.

Logarithms: the base does not matter. $\log_2 n$ and $\log_{10} n$ have the same order of growth. Changing the base is the same as multiplying by a constant factor.

Nesting: "When an inner computational process is repeated for each step in an outer process, then the order of growth of the entire process is a product of the number of steps in the outer and inner processes."

Lower-order terms: As the input grows, the fastest-growing part of a computation dominates the total resources used. So if we have an (otherwise) $\Theta(n)$ function that calls a $\Theta(n^2)$ function, it becomes $\Theta(n^2)$. Lower-order terms are always omitted from orders of growth.

Common categories:

These are listed from slowest to fastest growth:

• Constant: $\Theta(1)$ — Growth independent of input
• Logarithmic: $\Theta(\log n)$ — "Multiplying input increments resources"
• Linear: $\Theta(n)$ — More input = more resources
• Quadratic: $\Theta(n^2)$ — "Incrementing input adds n resources"
• Exponential: $\Theta(b^n)$ — "Incrementing input multiplies resources"

# 2.9 Recursive Objects

"When an object of some class has an attribute value of that same class, it is a recursive object."

This whole chapter is about the implementation of linked lists, trees, and sets as classes. It's really interesting, but I don't want to just copy over their implementation. If it interests you, go read it.

A linked list is a recursive object.

A linked list is composed of a first element, and the rest of the list. And the rest of the linked list is itself a linked list.
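The book implements this as a Link class, roughly like this (simplified from memory; the book's version validates that rest is a Link):

```python
class Link:
    """A linked list: a first element plus the rest, itself a Link (or Link.empty)."""
    empty = ()  # sentinel for the empty list (my simplification)

    def __init__(self, first, rest=empty):
        self.first = first
        self.rest = rest

s = Link(3, Link(4, Link(5)))
print(s.first)       # 3
print(s.rest.first)  # 4 — the rest is itself a Link: a recursive object
print(s.rest.rest.rest is Link.empty)  # True
```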

# 3.1 Introduction

"an interpreter, which determines the meaning of expressions in a programming language, is just another program."

## 3.1.1 Programming Languages

This chapter is about the design of interpreters and the computational processes they create when executing programs.

Many interpreters have a structure consisting of two mutually recursive functions:

1. Evaluate expressions in environments
2. Apply functions to arguments

They are defined in terms of each other. To apply a function, you have to evaluate the expressions in its body. To evaluate an expression, you might have to apply one or more functions.

This chapter is also based on Scheme (programming language), which is a dialect of Lisp.

# 3.2 Functional Programming

Section introduces Scheme and its syntax.

# 3.3 Exceptions

Exceptions are a way of adding error-handling logic to programs. It's a technique for interrupting the normal flow of execution: signal that something has happened, then jump to a place in the program that is designed to deal with that situation.

Python has raise and assert which can be used for this.

To handle these exceptions, we have try and except:

```
try:
    ## code
except Exception_You_Are_Handling as Name:
    ## code
```
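A concrete example:

```python
try:
    result = 1 / 0
except ZeroDivisionError as e:
    # Execution jumps here the moment the exception is raised.
    result = 'cannot divide by zero: ' + str(e)

print(result)  # cannot divide by zero: division by zero
```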

# 3.4 Interpreters for Languages with Combination

Metalinguistic abstraction = establishing new languages. Wikipedia says that it's "the process of solving complex problems by creating a new language or vocabulary to better understand the problem space."

This is especially important in programming, because we can both formulate and implement the languages. We do this by creating interpreters.

An interpreter is simply a function that, when applied to an expression of the language, does what is needed to evaluate it.

This chapter focuses on Scheme — and creating an interpreter for it in Python.

The interpreter would simply take lines of text and return the results of evaluating them. And it'll raise an appropriate exception if the expression is not 'well formed'.

## 3.4.3 Parsing Expressions

"Parsing is the process of generating expression trees from raw text input."

And a parser has two components

• a lexical analyzer — which partitions input string into tokens (minimal syntactic units of the language — names and symbols, for example),
• and a syntactic analyzer — which constructs an expression tree from that sequence of tokens.

# 4.1 Introduction

This chapter continues the notion of sequences (for example, lists and range), extending the concept of sequential data to include collections that have no 'end' (or are infinitely large).

# 4.2 Implicit Sequences

We can have a sequence in which not every element is stored explicitly in the memory of our computer — it is, instead, computed on demand.

A range, for example, is computed on demand. This is why we can have huge ranges of integers without using a lot of memory. We only store the end points as part of the range object.

This is lazy computation = delaying the computation of something until it is needed.
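You can see this with a huge range: constructing it is instant, and elements are computed on demand from the endpoints.

```python
r = range(10**12)  # a trillion integers, but only start/stop/step are stored

print(len(r))    # 1000000000000
print(r[10**6])  # 1000000 — computed from the endpoints, not looked up in memory
print(5 in r)    # True
```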

## 4.2.9 Streams

"Streams offer another way to represent sequential data implicitly. A stream is a lazily computed linked list."

A stream instance responds to requests for its first element and the rest of the stream. The rest of the Stream is also a Stream.

The rest of the stream is only computed when looked up — lazy computation.
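A sketch close to the book's Stream class (simplified from memory; the empty sentinel is my simplification):

```python
class Stream:
    """A lazily computed linked list."""
    empty = None  # sentinel for the empty stream

    def __init__(self, first, compute_rest=lambda: Stream.empty):
        self.first = first
        self._compute_rest = compute_rest  # zero-argument function, not a value

    @property
    def rest(self):
        """Compute (and cache) the rest of the stream on first access."""
        if self._compute_rest is not None:
            self._rest = self._compute_rest()
            self._compute_rest = None
        return self._rest

def integer_stream(first):
    """A lazy, endless stream of increasing integers."""
    return Stream(first, lambda: integer_stream(first + 1))

s = integer_stream(1)
print(s.first)            # 1
print(s.rest.rest.first)  # 3 — only computed when asked for
```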

# 4.3 Declarative Programming

Data can also be stored in large repositories called databases.

Databases consist of a data store (containing the data) and an interface for retrieving and transforming the data.

Each value stored in the database is called a record. And records with similar structure are grouped into tables.

We retrieve records using queries (statements in a query language). One such language is SQL (Structured Query Language — 'sequel').

SQL is a declarative programming language. The statements do not describe computations directly, but describe the desired outcome of a computation. It is up to the query interpreter of the database system to design and perform the necessary computations to give the desired outcome.

## 4.3.1 Tables

Tables are also called relations. It has a fixed number of named and typed columns. Each row is a record, and has one value for each column.
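A tiny example using Python's built-in sqlite3 module (the cities table echoes the book's example; the exact values here are made up):

```python
import sqlite3

db = sqlite3.connect(':memory:')  # a throwaway in-memory database
db.execute('CREATE TABLE cities (name TEXT, latitude INTEGER)')
db.executemany('INSERT INTO cities VALUES (?, ?)',
               [('Berkeley', 38), ('Cambridge', 42), ('Minneapolis', 45)])

# Declarative: say WHAT you want, not HOW to find it.
rows = db.execute('SELECT name FROM cities WHERE latitude > 40').fetchall()
print(rows)  # [('Cambridge',), ('Minneapolis',)]
```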

# 4.6 Distributed Computing

We sometimes want to coordinate effort between multiple computers. In such a case, the computers are 'independent' in that they do not share memory. They communicate through messages — transferred via a network between the computers.

Messages are simply a sequence of bytes.

# 4.7 Distributed Data Processing

This chapter discusses a situation in which we want to process a dataset so large that no one computer can hold it — so it is distributed among multiple computers. Since no single computer can respond to a query alone, communication between them is important.

This is coordinated (in this chapter) through a programming framework called MapReduce.

# 4.8 Parallel Computing

How does a computer do things concurrently (even if there only is one processing core)?

With context switching: rapidly switching between tasks without waiting for them to complete. That way, multiple programs can run on one computer.

How does parallelism work?

It's about doing multiple things at once. Since there are more and more processing cores on CPUs now, applications can now take advantage of that and run things in parallel.

So the applications have to arrange the work such that it can be done (as much as possible) in parallel. But how do we do that? How do we write concurrent code? Especially taking into account a mutable shared state.

## 4.8.1 Parallelism in Python

Python has two means of parallel execution: threading and multiprocessing.

"In threading, multiple 'threads' of execution exist within a single interpreter. Each thread executes code independently from the others, though they share the same data. However, the CPython interpreter, the main implementation of Python, only interprets code in one thread at a time, switching between them in order to provide the illusion of parallelism."

### Multiprocessing

Multiprocessing allows a program to spawn multiple interpreters (or processes), each of which can run code independently.

These do not share data. So anything to be shared must be communicated between the processes.

But this is true concurrency (if the CPU has multiple cores, that is). The processes can execute in parallel according to the level of parallelism provided by the underlying OS and hardware.

## 4.8.2 The Problem with Shared State

Suppose two threads both increment a shared variable. One thread reads the value; before it writes back the incremented result, execution switches to the other thread, which reads the same old value. Both then write back, and only one increment takes effect. This is the problem with shared state.

This is called a race condition.

To avoid those, we can make it such that shared data that can be mutated (changed) and accessed by multiple threads is protected against concurrent access.

Basically, threads would only access that shared data if the other thread is 'finished' with it.

Shared data is synchronized if it is protected from concurrent access.

## 4.8.3 When No Synchronization is Necessary

No synchronization is necessary for read-only data.

Also sometimes (rarely) the case with mutable shared data. But you'll have to know how the interpreter, underlying software, and underlying hardware works in order to distinguish those cases from others.

## 4.8.4 Synchronized Data Structures

A simple way to synchronize shared data is to use a data structure that provides synchronized operations.

This is the case with a queue.
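Python's queue.Queue is such a structure: its put and get operations are safe to call from multiple threads. A sketch:

```python
import threading
import queue

q = queue.Queue()  # all operations are internally synchronized

def producer():
    for i in range(5):
        q.put(i)
    q.put(None)  # sentinel: tells the consumer to stop

results = []

def consumer():
    while True:
        item = q.get()  # blocks until an item is available
        if item is None:
            break
        results.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 2, 3, 4]
```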

## 4.8.5 Locks

If there is no synchronized version of a particular data structure, we have to make our own.

We can do so using a lock.

A lock can be held by only one thread at a time; no other thread can acquire it until it's released by the thread that previously acquired it.

So threads would have to be programmed such that they don't access any shared data unless they own the lock.
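A sketch with threading.Lock, guarding a shared counter against the lost-update race described above:

```python
import threading

counter = 0
lock = threading.Lock()

def increment_many(times):
    global counter
    for _ in range(times):
        with lock:        # acquire; released automatically on exit
            counter += 1  # only one thread runs this at a time

threads = [threading.Thread(target=increment_many, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 — no lost updates
```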

## 4.8.6 Barriers

Another way to avoid conflicting access to shared data is to make your program operate in phases.

Shared data is mutated in a phase in which no other thread accesses it.

A barrier divides a program into phases, and they can only proceed when all threads have reached it. So code that is after a barrier can only be executed after the code before the barrier.
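threading.Barrier implements this: every party blocks at wait() until the last one arrives.

```python
import threading

barrier = threading.Barrier(3)  # 3 threads must arrive before any proceeds
order = []

def phased_worker(name):
    order.append((name, 'phase 1'))
    barrier.wait()  # no thread enters phase 2 until all have finished phase 1
    order.append((name, 'phase 2'))

threads = [threading.Thread(target=phased_worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every phase-1 entry precedes every phase-2 entry.
print(all(p == 'phase 1' for _, p in order[:3]))  # True
```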

## 4.8.7 Message Passing

We can also just avoid concurrent access to the same data entirely.

This is the case with multiprocessing in Python. They use their own separate interpreters.

So state would have to be communicated between processes if they need to 'share' state.

## 4.8.8 Synchronization Pitfalls

Synchronization can be used incorrectly, causing:

• Under-synchronization: Not properly synchronizing shared access.
• Over-synchronization: Over-synchronizing a program such that non-conflicting operations cannot occur concurrently.
• Deadlock: A situation in which two or more threads or processes are stuck, each waiting for another to finish. Failing to release locks can leave a thread stuck, but programs can reach deadlock even when every lock is eventually released. The source of deadlock is circular wait: no process can continue because each is waiting for another, which is (directly or indirectly) waiting for it.