## Sorting (with attitude)

This post takes a look at a number of different sorting techniques: heap sort, merge sort, quick sort and quick select, counting sort, radix sort, and bucket sort.

This post aims to introduce list, stack, and queue data structures with discussions about implementation and analysis.

This post aims to introduce the NP-complete complexity class. P and NP are introduced using the clique problem, and then a few different NP-complete reductions are discussed: clique, independent set, vertex cover, and dominating set.

This post explores binary heaps, including what they are, a linear time method for building a heap, heap sort, binary heaps for priority queues, and optimized heapify.

This post explores what topological sorting is all about and includes two methods for obtaining a topological ordering on a DAG: Kahn's algorithm and a DFS-based approach. The DFS approach is then used in Kosaraju's algorithm to identify strongly connected components of a graph.
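Not the post's code, but a minimal sketch of Kahn's idea (graph assumed to be an adjacency-list dict; names are my own):

```python
from collections import deque

def kahn_topological_sort(graph):
    """Return a topological ordering of a DAG given as {node: [neighbors]},
    or None if the graph contains a cycle."""
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    # seed the queue with all nodes that have no incoming edges
    queue = deque(u for u in graph if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # if some nodes were never emitted, a cycle prevented them from reaching indegree 0
    return order if len(order) == len(graph) else None
```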

This post explores single source shortest paths algorithms, specifically Bellman-Ford, Dijkstra, and finding the shortest path on a DAG.

This post explores minimum spanning trees (MSTs), specifically the cut property used by most MST algorithms, and then the most popular MST algorithms themselves: Kruskal, Boruvka, and Prim.

This post explores the union-find data structure, specifically from the vantage point of how we might come up with the data structure ourselves. A number of templates are then provided.

This post explores the basics of graphs. Specifically, we start by considering what graphs actually are (i.e., as a concept), then discuss a variety of ways to represent graphs in code, and finally move on to the two most fundamental graph traversal techniques: breadth-first search (BFS) and depth-first search (DFS).

This post explores dynamic programming. As usual, the Fibonacci numbers are used to introduce several key components of dynamic programming. This sets the stage for more advanced dynamic programming problems: rod cutting, subset sum, and Floyd-Warshall.

Dijkstra's argument that numbering should start at zero begins with an analysis of a subsequence of natural numbers, namely $2, 3, \ldots, 12$, and which *interval notation* should be preferred to denote this sequence. Exploring his argument in more detail allows for a greater understanding of why the half-open interval notation $[a, b)$ is to be preferred in a variety of programming contexts (e.g., implementing sliding windows).
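As a small illustration of why the half-open convention composes so nicely (the helper here is my own, not Dijkstra's): with $[a, b)$ the window's length is simply $b - a$, and adjacent windows $[a, b)$ and $[b, c)$ tile with no overlap and no gap.

```python
def window_sums(nums, k):
    """Sum every length-k sliding window, indexing each window
    with half-open bounds [left, right)."""
    sums = []
    for left in range(len(nums) - k + 1):
        right = left + k  # window is nums[left:right]; its length is right - left == k
        sums.append(sum(nums[left:right]))
    return sums
```

Python's own slicing (`nums[left:right]` excludes `right`) follows exactly this convention.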

This post explores asymptotic notation and analysis, namely big-$O$ notation and a variety of related properties. This sets the stage for discussing recurrence relations and ultimately the Master Theorem for analyzing the time complexity of recurrence relations. Finally, we take a look at a few isolated math topics that will prove fruitful for future analyses.

Learning Tarjan's algorithm for finding the strongly connected components (SCCs) of a directed graph can be rather difficult at first. Many learning materials sketch out the concepts and may even provide pseudocode, but few materials provide fully worked out examples along with working code. This blog post seeks to change that. Example graphs are taken from other quality learning sources (with attribution), but, importantly, working code is provided (in Python) to test Tarjan's algorithm to see the results for yourself.

Depth-first search (DFS) on a graph (binary tree or otherwise) is most often implemented *recursively*, but there are occasions where it may be desirable to consider an *iterative* approach instead, such as when we are worried about overflowing the call stack. In such cases it makes sense to implement DFS with our own explicit stack rather than rely on our program's implicit call stack. But doing so can lead to some problems if we are not careful.
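A rough sketch of the explicit-stack substitution (graph assumed to be an adjacency-list dict; this is illustrative, not code from the post):

```python
def iterative_dfs(graph, start):
    """Depth-first traversal using an explicit stack instead of recursion."""
    seen = {start}
    stack = [start]
    order = []
    while stack:
        node = stack.pop()
        order.append(node)
        # push neighbors in reverse so they pop off in their natural order
        for neighbor in reversed(graph[node]):
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return order
```

Even this small sketch hides a subtlety: marking nodes as seen at *push* time (as above) versus at *pop* time changes both the visit order and how large the stack can grow, which is one example of the kind of problem alluded to above.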

Monotonic stacks and queues have a reputation for being difficult. Why? I suspect the difficulty lies not in *what* these data structures are but in *how* they are used to craft solutions to various problems.

This post describes how to effectively use the `binarytree` package for Python. Working with binary trees on LeetCode is a motivating use case for learning the package.

This post describes how to set up a virtual machine, specifically the AMD64 server version of Ubuntu 18.04.2, using VirtualBox on a Mac. It also describes how to set up a shared folder between host (i.e., macOS) and guest (i.e., Ubuntu server running Linux).

This post details how to work with arbitrary bases in a positional numeral system (e.g., decimal, binary, etc.). Specifically, this post details how to convert numbers from base 10 to base $b$ in such a way that not only the whole number portion but also the fractional/radix portion is converted.
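A small sketch of the underlying idea (my own helper, not necessarily the post's code): the integer part falls out of repeated division by $b$, while the fractional part falls out of repeated multiplication by $b$.

```python
def to_base(x, b, frac_digits=8):
    """Convert a non-negative number x from base 10 to base b (2 <= b <= 16),
    returned as a string, keeping at most frac_digits fractional digits."""
    digits = "0123456789abcdef"
    whole = int(x)
    frac = x - whole
    # integer part: repeated division by b collects digits least-significant first
    int_part = ""
    while whole:
        whole, r = divmod(whole, b)
        int_part = digits[r] + int_part
    int_part = int_part or "0"
    if frac == 0:
        return int_part
    # fractional part: repeated multiplication by b peels off digits one at a time
    frac_part = ""
    for _ in range(frac_digits):
        frac *= b
        d = int(frac)
        frac_part += digits[d]
        frac -= d
        if frac == 0:
            break
    return int_part + "." + frac_part
```

Note that many fractions (e.g., $0.1$ in binary) do not terminate in the target base, which is why the sketch caps the output at `frac_digits` digits.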

This post details how to modify your Docusaurus site's webpack configuration.

This post details how to automate SQL query formatting via the clipboard on a Mac. Want to copy a SQL query, run a command, and then have a nicely formatted query to paste elsewhere? This is the post for you.

This post details how to synchronize Material UI's light and dark palettes with Docusaurus's light and dark modes, respectively.

This post details how to outfit your Docusaurus site's doc pages and blog posts with a comments section using giscus.

This post details how to go about updating/installing Node.js using nvm.

This post details how to go about using more than one GitHub account with SSH.

This post details how to go about deployment on an AWS EC2 instance.