# Computer Science from Scratch

## Intro

Let's take Brown University's CS19, *Accelerated Introduction to Computer Science*, which condenses 2+ semesters into one. If you only have access to a tablet or phone you can still complete the entire course, as there's nothing to install.

The professor teaching the course has a unique lecture style and specializes in research-driven curriculum. See his talk here about curriculum design as an engineering problem and the failure of knowledge transfer, defined to mean a skill learned in one context being used successfully in another: "Even the students who have taken theory of computation (decidability) can't recognize the unsolvable problem (in another course)". He also has a 2019 talk here on teaching computer science, given at an ACM conference, about intro-to-programming books and the problems that arise when students escape the subset of the language the book is trying to use, like generating confusing error messages. He has tried to fix this in the course we are about to take.

As we go through the course labs you will notice a heavy focus on higher-order functions: you will have to rewrite map/filter/fold for lists, trees, and streams, and write your own MapReduce. As this is a research-driven curriculum, understanding the input/output behavior of these functions was found to help students later understand gigantic system APIs and industry software libraries.

In other words, every aspect of introductory CS here has been carefully considered, with feedback on what actually works to teach this material, instead of the opinion-based curricula you find everywhere else.

### Materials available

- the new book *A Data-Centric Introduction to Computing* (though I use the 2020 version)
- lectures & recitations on YouTube, though they may not last; see the torrent archive below
- the Pyret online interpreter (CPO) to run code in your browser
- the assignments
- the labs at the bottom of this page (Big-O, Higher-Order Functions, Streams etc).

Many of the lectures only have about 20-30 mins of content; the rest is students being asked to work on something in class (which you should try yourself). The book chapters aren't very long either: signal over noise.

#### Torrent archive

A torrent file w/the recorded lectures, recitations and labs.

## What is Computation

There is a book *The Computational Beauty of Nature* where recursion, parallelism, iteration and other computational properties are all illustrated with examples from nature:

> **Collections, Multiplicity, and Parallelism**
>
> Complex systems with emergent properties are often highly parallel collections of similar units. Ant colonies owe much of their sophistication to the fact that they consist of many ants. This is obvious, but consider the implications. A parallel system is inherently more efficient than a sequential system, since tasks can be performed simultaneously and more readily via specialization. Parallel systems that are redundant have fault tolerance. If some ants die, a task still has a good chance of being finished since similar ants can substitute for the missing ones. As an added bonus, subtle variation among the units of a parallel system allows for multiple problem solutions to be attempted simultaneously. For example, gazelles as a species actively seek a solution to the problem of avoiding lions. Some gazelles may be fast, others may be more wary and timid, while others may be more aggressive and protective of their young. A single gazelle cannot exploit all of the posed solutions to the problem of avoiding lions simultaneously, but the species as a whole can. The gazelle with the better solution stands a better chance of living to reproduce. In such a case, the species as a whole can be thought of as having found a better solution through natural selection.

## Lecture 1 Pyret Demo

This course has a placement exam whose preparation is reading from HtDP; however, the book we read for the class, DCIC (or the 2020 PAPL book if you are using that), will cover everything you need.

Watch the cs19 lecture titled Wed 9-5-18. Go to code.pyret.org (referred to as CPO) and click *open editor* as you follow along with the lecture. **The book will cover everything he's doing**, like what a list is and what a function is; this is just a demonstration of the online interpreter. You can run Pyret offline too.

### First lecture

While watching the lecture: if you look up the professor's Brown University page he has a paper on Examplar detailing how many tests the students typically write, which is 30+. The whole paper is worth a read since you are doing the same course the paper is about. A good quote from the paper's introduction: **"students lack the metacognitive awareness to self-regulate their progress"**, meaning we begin the implementation before thinking about the problem, get frustrated when our solutions don't work or are filled with bugs, and give up too early. We can mimic Examplar by writing a bunch of test cases ourselves. Note that this is not 'test driven development' or any other scheme you've heard of; it is example writing to figure out the problem domain, then later testing the properties or behavior of whatever we are building, not direct values.

So how can we figure out the student-given test cases for the median program? By looking up what a median is and the procedure to find it. Reading that article we see that the median is a value that separates a finite list of numbers, found by arranging them from smallest to greatest. If there is an odd number of entries, the middle number that splits the list into two even parts is picked. Otherwise it is defined to be the mean of the two middle values, which means adding them both and dividing by 2, the idea being to get as close as possible to evenly splitting the data. That's why the test for [list: 1, 3, 2] is 2: your program would reorder the list to be [list: 1, 2, 3] and choose 2 as the median.
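Following that procedure, here are the kinds of examples we might write before any implementation. Note `median` is a hypothetical function name; these examples won't run until you've actually written it:

```
# Hypothetical examples for a median function, written before implementing it
check "median examples":
  median([list: 1, 3, 2]) is 2        # odd length: middle of sorted [list: 1, 2, 3]
  median([list: 4, 1, 3, 2]) is 2.5   # even length: mean of middle values 2 and 3
  median([list: 5]) is 5              # single element is its own median
end
```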

The Fermi problems Prof K talks about are released on Piazza, which requires campus logins; https://piazza.com/class is a kind of message forum for university courses that encourages anonymous postings so students don't feel embarrassed to ask questions. Sadly, with global remote learning because of the pandemic, many universities now solely distribute their course materials through it and other proprietary content management software, meaning the public doesn't have access anymore. The internet is filled with Fermi problems you can try yourself, however. Looking at example interview questions for some well known Wall Street outfits, one of their Fermi questions is 'estimate the number of books in your city'. You aren't expected to know the answer; they want to see how you would approximate it.

### PAPL chapter 3 Getting Started

I went off the older book, but at this point in 2021 it's almost the same as DCIC, which you should use instead as it's maintained; match up the chapter titles.

Prof K's advice while reading:

If you are new to programming: people learn programming by actually writing programs and running them. As you read, type in the code in the document, see how it works, change it a little and see what happens. Actually trying the code in the assigned reading will be essential for you to get comfortable with the material.

Read chapter 3 *Getting Started* while you have CPO open (https://code.pyret.org). The examples with ">" in front of them indicate to enter this into the REPL on the right-hand side of the Pyret code editor, pressing Enter to evaluate, like he did in the first lecture. You can increase the font size if you click the lambda skull logo on the top left side.

### PAPL chapter 4 Naming Values

This chapter tells you exactly what happens in CPO when you click the run button or Ctrl-Enter. If you make a mistake like accidentally deleting all your code Ctrl-Z will undo on most browsers I tested it on.

The *Do Now* at the end of 4.3: why did two-rects disappear from the program directory? Because of how *Run* works: it clears out the existing directory and loads a new one from the definitions in the definitions window, so if you have temporarily defined variables in the interactive right-hand window they will be lost once you use *Run*.

4.4: all these built-ins can be looked up in the documentation if you're unsure what is going on. Copy all the code and change the images around to see what happens.

### PAPL chapter 5 Functions

- There is an optional recorded recitation for this chapter here

This chapter tells you exactly how functions are defined and run by Pyret, and what annotated parameters are, which will come up in every future lecture.

Something I used to do when beginning was delete a paren and add it again to see the scope it closed off, as these nested expressions can get confusing. CPO will highlight where a scope begins/ends if you move the arrow keys around the closing brackets. For example in 5.2.1:

```
# use the arrow keys around the closing parens to navigate scope
frame(
  above(rectangle(120, 30, "solid", top),
    above(rectangle(120, 30, "solid", middle),
      rectangle(120, 30, "solid", bot))))

# this is: frame(img)
# where: img    = above(thing1, thing2)
#        thing2 = above(rec1, rec2)
```

In *5.5 Functions Practice: Cost of pens* the pen-cost function evaluates to a number, so you can do algebra on its output; try it in the test cases:

```
fun pen-cost(num-pens :: Number, message :: String) -> Number:
  doc: "total cost for pens, each 25 cents plus 2 cents per message character"
  num-pens * (0.25 + (string-length(message) * 0.02))
where:
  pen-cost(3, "wow") + pen-cost(10, "smile") is (3 * (0.25 + (string-length("wow") * 0.02))) + (10 * (0.25 + (string-length("smile") * 0.02)))

  # output algebra tests
  pen-cost(4, "") * pen-cost(10, "") is 2.5
  pen-cost(10, "") / pen-cost(5, "") is 2
  pen-cost(0, "") is pen-cost(4, "") - pen-cost(4, "")

  # output is a value, so it can be used as input
  # this inputs 1 to pen-cost()
  pen-cost(pen-cost(4, ""), "") is pen-cost(1, "")
end
```

### PAPL chapter 6 Conditionals

Conditional expressions and exactly how Pyret evaluates if-branches are defined here; all you do in this chapter is manually follow code through if-branches and flip through the documentation.

## Lecture 2 Rainfall Problem

Watch the second lecture, Mon 9/10/18. For the very first task, just imagine it in your head; you don't have to actually write a program. It's an interesting lecture about the rainfall problem and the benefits of writing a lot of examples in order to understand badly defined problems. Data science/machine learning is brought up as an example of dirty data. Higher-order functions are talked about, like filter and map, which are in the Pyret documentation and which we will learn in depth later. A filter goes through each element of a list and returns a new list based on some boolean test you give it, and map applies a function to every link in the list, returning a new list.
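A quick sketch of both in CPO, using Pyret's built-in map and filter:

```
check "map and filter":
  # filter keeps elements where the boolean test is true
  filter(lam(x): x > 0 end, [list: 1, -2, 3]) is [list: 1, 3]
  # map applies the function to each element, producing a new list
  map(lam(x): x * 2 end, [list: 1, 2, 3]) is [list: 2, 4, 6]
end
```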

What to get out of this lecture: "Data is noisy, produced by flaky hardware and buggy programs and you will have no choice but to clean the data". There's a video from the 'Advanced R' book author at the end, which is here; the gist of that talk is reducing things based on what is similar so you can generalize. We haven't done map/filter etc. yet, but will quickly catch up by lecture 4. Reminder: we skipped the placement where some of this was introduced.

### How to write examples

This recitation video talks about good examples and the program directory meaning what is in scope. The end introduces table shaped data ie: databases and vectors.

### PAPL chapter 7 & 8 Tabular data

- There is a recorded recitation for these chapters here

This is actually an SQL introduction, and a linear algebra introduction (tabular data can be represented as vectors). In previous versions of PAPL he noted why these table row operations were composed to be independent of each other, so that combining them in complex expressions is possible, just like SQL or the Excel spreadsheets every trader still uses. This chapter also has a good writeup about having a clear error model to think through.

There's an exercise using only booleans: how would you write the function is-email(e :: String) -> Boolean? Look through the docs.

```
fun is-email(e :: String) -> Boolean:
  doc: "Incorrect program to verify is-email"
  # boolean AND (boolean OR boolean OR boolean)
  string-contains(e, "@") and
  (string-contains(e, ".com") or string-contains(e, ".edu") or string-contains(e, ".org"))
where:
  is-email("a@wrongdomain.no") is false
  is-email("b@test.com") is true
  is-email("ckljfk.edu") is false
  is-email("d@d.edu") is true
  # these last two test cases fail of course
  # you can make them pass using string functions in the library
  # to completely parse the input, try it
  is-email("@.edu") is false
  is-email("a@@.edu") is false
end
```

The advice in this chapter is to force inputs through a structure like a form to ensure they are complete instead of allowing any arbitrary user supplied input.

### Recitation: Lambdas

This recitation explains anonymous functions using table row operations as examples. At this stage they're just functions without a binding to an identifier; later we will learn how they give us things like closures, and much later, when we do the end of the book that covers language theory, we will see it's just lambdas all the way down.

Play around in CPO:

```
# create an anonymous function, bind it to test:
test = lam(x :: Number): x + 1 end

>>> test(2)
3
>>> test(2 + test(2))
6
```

## Recursion

In this example len1 calls len2, which calls len3, and this repeats until the input is empty, returning 0 and stopping the ping-pong back-and-forth computation. If you remove the numbers from lenX and write a recursive function where len calls itself, conceptually this is still the same as calling multiple similar functions.

```
# Silly recursion simulator
fun len3(input):
  cases (List) input:
    | empty => 0
    | link(f, r) => 1 + len1(r)
  end
end

fun len2(input):
  cases (List) input:
    | empty => 0
    | link(f, r) => 1 + len3(r)
  end
end

fun len1(input):
  cases (List) input:
    | empty => 0
    | link(f, r) => 1 + len2(r)
  end
end

check "Recursion simulator":
  a = [list: 1, 2, 3]
  len1(a) is 3
end
```

Hand step that program using cut + paste substitution:

- len1([list: 1, 2, 3])
- 1 + len2([list: 2, 3])
- 1 + (1 + len3([list: 3]))
- 1 + (1 + (1 + len1(empty)))
- 1 + (1 + (1 + 0))

Rewrite it so len calls itself:

- len([list: 1, 2, 3])
- 1 + len([list: 2, 3])
- 1 + (1 + len([list: 3]))
- 1 + (1 + (1 + len(empty)))
- 1 + (1 + (1 + 0))

A lot of intro courses make the concept of self-reference way more difficult than it actually is. In the first course I took I was completely confused about how the state worked, since the function was being called again, not realizing each call is a totally new and different copy of the function. This confusion is common.

Instead of using a delayed computation where you wait for all the calls to len to return, try a recursive function that doesn't use 're-entrancy', meaning each computation finishes before the function calls itself again. There doesn't need to be re-entrancy, with concurrent calls sharing the same function (and in turn generating many stack frames, something we will also learn about later), in order for a computation to be recursive.

```
fun len(input, accumulator):
  cases (List) input:
    | empty => accumulator
    | link(f, r) => len(r, 1 + accumulator)
  end
where:
  b = [list: 1, 2, 3]
  len(b, 0) is 3
end
```

Hand step:

- len([list: 1, 2, 3], 0)
- len([list: 2, 3], 1)
- len([list: 3], 2)
- len(empty, 3)
- 3

In the book SICP when you learn this, they point out the difference in shapes between recursive re-entrant code, which grows bigger, and the shape of an iterative or accumulator implementation. My implementation requires annoying parameters, like initializing the accumulator to zero, which can be hidden from the end user with a helper function; intro books often call this a trampoline, as you evaluate the code by going to the bottom of the function then jumping back into the body:

```
fun list-length(input):
  fun helper(i, acc):
    cases (List) i:
      | empty => acc
      | link(f, r) => helper(r, 1 + acc)
    end
  end
  helper(input, 0) # internal init of accumulator
where:
  c = [list: 1, 2, 3]
  list-length(c) is 3 # nicer interface, single list parameter
end
```

Same function with optional type annotations. The types 'A' and 'B', which you will read about soon, are type variables, allowing this len function to consume booleans, numbers, or strings and still be type checked in Pyret.

```
fun len<A>(input :: List<A>) -> Number:
  fun helper<B>(i :: List<B>, acc :: Number) -> Number:
    cases (List) i:
      | empty => acc
      | link(f, r) => helper(r, 1 + acc)
    end
  end
  helper(input, 0)
where:
  c = [list: 1, 2, 3]
  d = [list: "a", "b", "c"]
  e = [list: true, false]
  len(c) is 3
  len(d) is 3
  len(e) is 2
end
```

### Recitation: Recursion & Lists

An optional tutorial on lists, cases syntax and recursion to watch while reading chapters 9 and 10. 'Recursion means you do not need to know how to solve the entire problem in one go, you only need to know how to do one step', or as Robert Harper puts it in his book *Programming in Standard ML*: you unroll the recursion one step, then repeat that step, recursively descending down a structure to its base case, which in a list structure is the empty case.

### PAPL chapter 9 & 10 Lists

**10.4.2 my-pos-nums**

The exercise is to hand step the function thinking about the examples, then copy the code and tests to CPO:

```
fun my-pos-nums(l):
  cases (List) l:
    | empty => empty
    | link(f, r) =>
      if f > 0:
        link(f, my-pos-nums(r))
      else:
        my-pos-nums(r)
      end
  end
where:
  # exercise: lists ending with positive, or with 0
  my-pos-nums([list: 1, -2, 3, 4]) is [list: 1, 3, 4]
  my-pos-nums([list: -2, 0, -4]) is empty
end
```

Hand step my-pos-nums program thinking about one link at a time:

```
my-pos-nums([list: 1, -2, 3, 4])
  f = 1, r = [list: -2, 3, 4]    is 1 > 0? yes
link(1, my-pos-nums([list: -2, 3, 4]))
  f = -2, r = [list: 3, 4]       is -2 > 0? no
link(1, my-pos-nums([list: 3, 4]))
  f = 3, r = [list: 4]           is 3 > 0? yes
link(1, link(3, my-pos-nums([list: 4])))
  f = 4, r = empty               is 4 > 0? yes
link(1, link(3, link(4, my-pos-nums(empty))))
  input is empty: return empty as per the cases statement

No more delayed computations; start with the inner bracket and produce values:
link(1, link(3, link(4, empty)))
link(1, link(3, [list: 4]))
link(1, [list: 3, 4])
[list: 1, 3, 4]
```

Link syntax is link(element, list), and all those link operations are pending while waiting for a resolved list to attach their element to; once you get to the empty case, the delayed computations can resolve.
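You can see this in the REPL; link builds a list by pairing an element with an existing list:

```
check "link syntax":
  link(1, empty) is [list: 1]
  link(1, link(2, empty)) is [list: 1, 2]
  link(1, [list: 3, 4]) is [list: 1, 3, 4]
end
```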

**10.5.1 my-max**

The exercise: "Suppose num-max were not already built in, can you define it?" Try some examples; the function writes itself:

```
my-num-max(1, 2) is 2
my-num-max(1, -1) is 1
my-num-max(2, 2) is 2
```
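One possible definition those examples suggest (my version, not the book's):

```
fun my-num-max(a :: Number, b :: Number) -> Number:
  doc: "returns the larger of two numbers"
  if a >= b:
    a
  else:
    b
  end
where:
  my-num-max(1, 2) is 2
  my-num-max(1, -1) is 1
  my-num-max(2, 2) is 2
end
```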

**10.7 Accumulators**

*"Observe that we do not change my-running-sum itself to take extra arguments. There are multiple reasons for this"*. One reason is that you want simple documentation and user-intuitive, bug-free access to the function, just like a typical command shell utility such as change directory (cd): you wouldn't have users init your accumulators with cd(empty, 0, dir-name); you would instead want a simple interface that only takes a single input, with all internal operations encapsulated.

Try the exercise for my-max with an accumulator:

```
fun my-max(l :: List<Number>) -> Number:
  doc: "my-max with an accumulator exercise"
  fun helper(acc :: Number, e :: List<Number>) -> Number:
    cases (List) e:
      | empty => acc
      | link(f, r) =>
        # if f is bigger than our accumulator, it becomes the new accumulator
        if f > acc:
          helper(f, r)
        else:
          helper(acc, r)
        end
    end
  end
  # seed the accumulator with the first element, not 0:
  # otherwise a list of all negatives would wrongly return 0
  cases (List) l:
    | empty => raise("my-max: empty list")
    | link(f, r) => helper(f, r)
  end
where:
  my-max([list: 2, 1, 4, 3, 2]) is 4
  my-max([list: 1, 4, -3, 4]) is 4
  my-max([list: 2]) is 2
  my-max([list: 0, 0.1, -0.11]) is 0.1
  my-max([list: 1/3, 1/4]) is 1/3
  my-max([list: -3, -1, -2]) is -1
end
```

**10.7.2 my-alternating**

I wrote this with a helper function, using polymorphic type annotations as defined at the end of this chapter. Try clicking on the Pyret Run button and changing it to *Type-Check and Run* for static type checking. These type annotations 'T' and 'A' mean the function can accept any type, such as booleans, strings, or numbers, where the program will infer the type based on what the inputs are. If you're wondering: T and A are just variables you can change to anything, so long as they aren't existing types already defined.

Example, my-alt could also be:

```
fun my-alt<Anything>(e :: List<Anything>, keep :: Boolean) -> List<Anything>:
```

```
fun my-alternating<T>(l :: List<T>) -> List<T>:
  doc: "my-alternating with polymorphic type annotations"
  fun my-alt<A>(e :: List<A>, keep :: Boolean) -> List<A>:
    cases (List) e:
      | empty => empty
      | link(f, r) =>
        if keep:
          link(f, my-alt(r, false))
        else:
          my-alt(r, true)
        end
    end
  end
  my-alt(l, true)
where:
  # 3 different types
  my-alternating([list: 1, 2, 3, 4, 5, 6]) is [list: 1, 3, 5]
  my-alternating([list: "a", "b", "c"]) is [list: "a", "c"]
  my-alternating([list: true, false, true]) is [list: true, true]
end
```

**10.8.4 uniq-rec3**

ur = uniq-rec(r) means uniq-rec was called with the rest of the list, and the first element is now only checked against that returned, already-distinct rest, instead of linearly scanning the entire rest. If the rest of the list is a billion entries, many of them dupes, then you saved yourself comparisons against all those duplicates by slicing off the first element and comparing it only to an already-distinct rest.
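A sketch of that shape (my reconstruction, assuming the built-in .member list method; the book's version may differ):

```
fun uniq-rec<T>(l :: List<T>) -> List<T>:
  doc: "removes duplicates, keeping the last occurrence of each element"
  cases (List) l:
    | empty => empty
    | link(f, r) =>
      ur = uniq-rec(r)   # the rest, already made distinct
      if ur.member(f):   # only compare f against the distinct rest
        ur
      else:
        link(f, ur)
      end
  end
where:
  uniq-rec([list: 1, 2, 1, 3, 2]) is [list: 1, 3, 2]
  uniq-rec([list: ]) is empty
end
```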

**Polymorphic Types**

In 10.9, my-max<T>(l :: List<T>) -> T: as written above for my-alternating, T means the list can be of any type <T>, but its first element determines what type the rest will be (string, number, boolean, custom type). Why does my-max return type T? Maybe there is a notion of a max string or a max custom type, instead of always returning a number. This will come up in lecture numerous times, and later when we learn parameterization over types.
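To make the return type concrete, here is a hypothetical generic function (my example, not from the book) whose return type is the same T as the input list's element type:

```
# my-first returns type T: whatever the list's element type happens to be
fun my-first<T>(l :: List<T>) -> T:
  cases (List) l:
    | empty => raise("empty list has no first element")
    | link(f, r) => f
  end
where:
  my-first([list: 1, 2]) is 1        # T is Number here
  my-first([list: "a", "b"]) is "a"  # T is String here
end
```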

## Lecture 3 Insertion Sort

We're watching the Wed 9/12/18 lecture on sorting. We have almost caught up to the class now.

He walks through exactly what a template is and how to extract data to make a template, then spends the rest of the class writing insertion sort using only what we know so far. You have to hand step through it a little to understand what insertion sort is doing. The sort function calls insert with insert(f, sort(r)); it isn't calling insert(f, r). This means: insert f into sort(r), the sorted rest of the list.

This is the code on the board which you should have typed into CPO yourself:

```
fun insert(n :: Number, l :: List<Number>) -> List<Number>:
  cases (List) l:
    | empty => [list: n]
    | link(f, r) =>
      if n <= f:
        link(n, l)
      else:
        link(f, insert(n, r))
      end
  end
where:
  # student given examples:
  insert(2, [list: 3, 4, 5]) is [list: 2, 3, 4, 5]
  insert(1, [list: 0, 1, 1, 2]) is [list: 0, 1, 1, 1, 2]
  insert(-1, [list: -2, 0, 3]) is [list: -2, -1, 0, 3]
  insert(3, [list: ]) is [list: 3]
  insert(4, [list: 1, 2, 3]) is [list: 1, 2, 3, 4]
end

fun sort(l :: List<Number>) -> List<Number>:
  cases (List) l:
    | empty => empty
    | link(f, r) => insert(f, sort(r))
  end
where:
  sort([list: 3, 2, 1]) is [list: 1, 2, 3]
  sort([list: 1, 2, 3]) is [list: 1, 2, 3]
  sort([list: 4, 1, -1]) is [list: -1, 1, 4]
  sort([list: 0.1, 0, 0.11]) is [list: 0, 0.1, 0.11]
end
```

Let's step the evaluation by hand with [list: 0, 1, 3, 2] as the input to sort(). Reaching the cases we have f = 0, r = [list: 1, 3, 2]. We call insert on f and sort(r), but the computation is delayed as we have to wait for the calls to sort(r) to complete, since every time we enter the cases with a non-empty list to sort, it calls itself with the rest of the list. We end up with this delayed computation:

- insert(0, (insert(1, (insert(3, (insert(2, empty)))))))

Evaluation starts at the most nested bracket, which is insert(2, empty):

- insert(2, empty)
- n = 2, l = empty
- is list empty? yes, so return [list: 2]

- insert(3, [list: 2])
- n = 3, f = 2, r = empty
- is 3 <= 2?
- no, so link(2, insert(3, empty))
- return [list: 2, 3]

- insert(1, [list: 2, 3])
- n = 1, f = 2, r = [list: 3]
- is 1 <= 2? yes
- then it must be the smallest in the entire list:
- link(1, [list: 2, 3])
- return [list: 1, 2, 3]

- insert(0, [list: 1, 2, 3])
- is 0 <= 1? yes
- link(0, [list: 1, 2, 3])
- return [list: 0, 1, 2, 3]

- no more delayed computations

The returns to sort():

- insert(0,(insert(1,(insert(3, [list: 2])))))
- insert(0,(insert(1, [list: 2, 3])))
- insert(0, [list: 1, 2, 3])

Every time a list is returned, the next call to insert can happen, because it is now complete with insert(n, list).

## Assignment 1: DocDiff

The same algorithm is covered in this lecture at around 33:11; note where he shows the 3D diagram that we are calculating an angle to determine the distance between words (strings) in a document.

This assignment writeup seems purposely difficult, to get us to write examples to help understand what the assignment wants. For the algebraic definition of the dot product, look at the example for [1,3,-5] x [4,-2,-1]. Each index in the first list is multiplied by its corresponding index in the second list, then those results are added together. To read the sigma notation: \(a \cdot b\) is the sum of \(a_ib_i\) starting at index 1 (the first element of the list) up to the length of the list (n). If a = \([1_{1}, 3_{2}, -5_{3}]\) and b = \([4_{1}, -2_{2}, -1_{3}]\):

\(a \cdot b = \sum_{i=1}^{3} a_ib_i = (1_{a1} \cdot 4_{b1}) + (3_{a2} \cdot -2_{b2}) + (-5_{a3} \cdot -1_{b3}) = 3\)
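That definition translates almost directly into Pyret. Here is one possible sketch (my own, not the assignment's solution; it assumes both lists have the same length and uses the first/rest fields of link):

```
fun dot-product(a :: List<Number>, b :: List<Number>) -> Number:
  doc: "sum of pairwise products; assumes both lists have the same length"
  cases (List) a:
    | empty => 0
    | link(f, r) => (f * b.first) + dot-product(r, b.rest)
  end
where:
  # (1 * 4) + (3 * -2) + (-5 * -1) = 4 - 6 + 5 = 3
  dot-product([list: 1, 3, -5], [list: 4, -2, -1]) is 3
  dot-product([list: ], [list: ]) is 0
end
```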

Example vectors:

|                | test | zest |
| -------------- | ---- | ---- |
| [list: "test"] | 1    | 0    |
| [list: "zest"] | 0    | 1    |

You may want to read the Pyret style guide; you can also go through the Pyret GitHub code, looking at how they write functions.

**Stages of the Design Recipe**

Try the design recipe for this assignment. Recall Prof K's paper where he tells us exactly how to start these assignments: write 'interesting examples' prior to any implementation. Interesting means an assertion that probes our understanding of the problem.

First step: analyze the values the program is working with, both input and output. The input is two lists of strings, whose case we ignore, and the output will be a value with square roots involved (irrational numbers), so we are dealing with Roughnums. Write the purpose and stub of the function:

```
fun overlap(doc1 :: List<String>, doc2 :: List<String>) -> Number:
  doc: "Consumes 2 documents, returns their overlap"
  ...
where:
end
```

**Step 2**
Understand the data definition. We need a distinct combined list of both documents, that ignores case, that is sorted, to represent the word freq count of each document.

Handwritten examples to figure out the data definition:

```
Example input to overlap:
doc-1 = [list: "a", "B", "c"]
doc-2 = [list: "d", "d", "D", "b"]

Distinct/lowercase and sorted combination of doc-1/doc-2:
doc-1 + doc-2
[list: "a", "b", "c", "d"]

Vector of both against [list: "a", "b", "c", "d"]:
doc-1-vec = [list: 1, 1, 1, 0] # 1a, 1b, 1c, 0d
doc-2-vec = [list: 0, 1, 0, 3] # 0a, 1b, 0c, 3d
```

**Step 3**
Write examples of calling the function (tests). From the 2015 chapter on examples: "Examples should cover at least all the different cases mentioned in the data definition, and also to test common extremes and base cases."

```
fun overlap(doc1 :: List<String>, doc2 :: List<String>) -> Number:
  ...
where:
  # vectors [list: 1, 1, 0] and [list: 0, 0, 1]
  overlap([list: "a", "b"], [list: "c"]) is 0
  # vector [list: 1]
  overlap([list: "a"], [list: "a"]) is 1 / num-max(num-sqr(num-sqrt(1)), num-sqr(num-sqrt(1)))
  # cover the upper/lower case:
  # vectors [list: 1, 1, 1, 0] and [list: 0, 1, 0, 3]
  epsilon = 0.000000000000001
  num-abs((overlap([list: "a", "B", "c"], [list: "d", "d", "d", "b"])) - (1 / num-max(num-sqr(num-sqrt(3)), num-sqr(num-sqrt(10))))) < epsilon is true
end
```

In these examples I calculated the dot product by hand and input them in the tests:

```
dot-product examples

Both:
  [list: 1, 1, 1, 0]
  [list: 0, 1, 0, 3]
  --------------------
  0 + 1 + 0 + 0 = 1

Itself:
  [list: 0, 1, 0, 3]
  [list: 0, 1, 0, 3]
  --------------------
  0 + 1 + 0 + 9 = 10

Itself:
  [list: 1, 1, 1, 0]
  [list: 1, 1, 1, 0]
  --------------------
  1 + 1 + 1 + 0 = 3
```

If you have two completely different documents, like [list: "a"] and [list: "b"], then their vectors are [list: 1, 0] and [list: 0, 1], representing [list: "a", "b"], the distinct list of both inputs. The dot-product = (1 * 0) + (0 * 1) = 0, and 0 divided by anything is 0. If the lists are both the same, like [list: 1] and [list: 1], then dot-product = 1 and our overlap equation is 1/1 or 1.

The last test uses an epsilon displacement check, which for the purposes of this assignment is any arbitrarily tiny displacement to use for equality checking. We take the output of overlap, subtract the manually calculated value, convert the result to an absolute value (which just means remove any negative sign and return a positive number), and see if this is less than our epsilon. Cut and paste the hand made test and run it; you'll see the output is ~0.099999.., a Roughnum we can't compare for equality with standard equals. There is a built-in 'is-roughly' to use in a test block instead of 'is', but the assignment wants us to write our own.

Write more tests, such as an epsilon displacement test for two bigger documents that are the same; since the output will be a Roughnum like ~0.999999 you can't directly compare it to 1.
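That epsilon check can be pulled out into its own helper; a minimal sketch (my own helper name, not from the assignment):

```
fun roughly-equal(a :: Number, b :: Number) -> Boolean:
  doc: "true when a and b differ by less than an arbitrary tiny epsilon"
  epsilon = 0.000000000000001
  num-abs(a - b) < epsilon
where:
  # num-sqrt(2) * num-sqrt(2) is a Roughnum near 2, not exactly 2
  roughly-equal(num-sqrt(2) * num-sqrt(2), 2) is true
  roughly-equal(0.1, 0.2) is false
end
```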

**Step 4**
Write the function. The tests we wrote gave us our overlap equation:

```
# import the list library
import lists as L

fun overlap(doc1 :: List<String>, doc2 :: List<String>) -> Number:
  doc: "Consumes 2 documents, returns their overlap"
  # you write these
  doc-1-lowercase = ???
  doc-2-lowercase = ???
  # .sort() is a method on any list; try it yourself, enter [list: 2, 1, 3].sort()
  # as per the documentation, '+' appends two lists together
  distinct-sorted = L.distinct(doc-1-lowercase + doc-2-lowercase).sort()
  # you write these
  vec-1 = create-vector(???)
  vec-2 = create-vector(???)
  dot-product(vec-1, vec-2) / num-max(num-sqr(num-sqrt(dot-product(vec-1, vec-1))), num-sqr(num-sqrt(dot-product(vec-2, vec-2))))
where:
  ... tests using an epsilon displacement
end
```

Now use the exact same method to write dot-product and create-vector. You can test properties of create-vector(doc-1): for example, if you add up every element of the output vector it should equal the length of doc-1-lowercase. You could then bombard create-vector with random inputs instead of testing for specific edge cases/values.

```
# The folded sum of [list: 1, 1, 1, 0] is 3 and [list: "a", "b", "c"].length() is 3
create-vector([list: "a", "b", "c", "d"], [list: "a", "b", "c"])
  .foldl(lam(acc, x): x + acc end, 0)
  is [list: "a", "b", "c"].length()
```

The output of create-vector is a list, so create-vector(params) becomes [list: ] after evaluation; you can then do all kinds of list operations on the result, like [list-output].foldl(), and I put .foldl() on a separate line for readability. We learn what foldr and foldl are in the next chapter, but you can rewrite .foldl in this test as a simple cases on (List), adding up each f in the vector's links and returning the sum.

## Lab: Higher-Order Functions

Let's do the lab on functions as data, and write our own map, filter and fold.

```
fun fun-plus-one(num :: Number, func :: (Number -> Number)) -> Number:
  func(num) + 1
end

>>> fun-plus-one(16, num-sqrt)
5
>>> fun-plus-one(3, num-sqr)
10
```

Try entering various built-ins, or a function you wrote yourself, as the second parameter to fun-plus-one. In case you get hung up on syntax: the 'func' parameter name can be anything, and try changing the (Number -> Number) annotation to ( -> Number), or removing the annotation completely so it's (num, f); the function still works.

**Map**

We're given test cases and asked to implement f-to-c and goldilocks, and given the formula for f to c temperature unit conversion.

```
fun f-to-c(f-lst :: List<Number>) -> List<Number>:
  cases (List) f-lst:
    | empty => empty
    | link(f, r) => link((f - 32) * (5/9), f-to-c(r))
  end
end

fun goldilocks(f-lst :: List<Number>) -> List<String>:
  cases (List) f-lst:
    | empty => empty
    | link(f, r) =>
      if f > 90:
        link("too hot", goldilocks(r))
      else if f < 70:
        link("too cold", goldilocks(r))
      else:
        link("just right", goldilocks(r))
      end
  end
end

check:
  f-to-c([list: 131, 77, 68]) is [list: 55, 25, 20]
  goldilocks([list: 131, 77, 68]) is [list: "too hot", "just right", "too cold"]
end
```

In map(lam(x): x + 1 end, f-lst), the 'x' input is the element in each link of the list f-lst; you are mapping over the whole list one element at a time, applying the lambda function to each element and returning a new list. We are asked to implement goldilocks using map:

```
# lambda notation
fun goldilocks(f-lst :: List<Number>) -> List<String>:
  f-lst.map(lam(x):
      if x > 90: "too hot"
      else if x < 70: "too cold"
      else: "just right"
      end
    end)
end

check:
  goldilocks([list: 131, 77, 68]) is [list: "too hot", "just right", "too cold"]
end

# shorthand syntax for lambdas from lab documentation
fun goldilocks2(f-lst :: List<Number>) -> List<String>:
  f-lst.map({(x):
      if x > 90: "too hot"
      else if x < 70: "too cold"
      else: "just right"
      end})
end

check:
  goldilocks2([list: 131, 77, 68]) is [list: "too hot", "just right", "too cold"]
end
```

.map() is a built-in list method, where f-lst.map(…) means we are mapping over f-lst. We could have also rewritten it like the following example, where we consume a function from type A (Number) that produces a different type B (String, Boolean):

```
fun goldilocks(x :: Number) -> String:
  if x > 90: "too hot"
  else if x < 70: "too cold"
  else: "just right"
  end
end

fun goldbool(x :: Number) -> Boolean:
  if (x > 90) or (x < 70): false
  else: true
  end
end

fun generic-map<A,B>(f :: (A -> B), f-lst :: List<A>) -> List<B>:
  map(f, f-lst)
end

check:
  generic-map(goldilocks, [list: 131, 77, 68]) is [list: "too hot", "just right", "too cold"]
  generic-map(goldbool, [list: 131, 77, 68]) is [list: false, true, false]
end
```

Write your own version of map. I used the documentation examples for map as check tests. Remember map consumes a function and a list, applies the function to each entry in the list returning a new list.

```
fun my-map<A,B>(func :: (A -> B), l :: List<A>) -> List<B>:
  cases (List) l:
    | empty => empty
    | link(f, r) => ... my-map(func, r)
  end
end

check:
  my-map(num-to-string, [list: 1, 2]) is [list: "1", "2"]
  my-map(lam(x): x + 1 end, [list: 1, 2]) is [list: 2, 3]
  my-map(lam(x): true end, [list: "test", "test2"]) is [list: true, true]
end
```

**Filter**

The first two assignments are straightforward, to get you used to using filter, except you have to use string-to-code-points() instead of any built-in string functions.

```
fun tl-dr(lol :: List<String>, length-thresh :: Number) -> List<String>:
  filter(lam(element): string-to-code-points(element).length() <= length-thresh end, lol)
end

check:
  tl-dr([list: "dkfjdkj", "hi", "dkfjk"], 2) is [list: "hi"]
  tl-dr([list: "corner", "case", ""], 2) is [list: ""]
  tl-dr([list: "a", "b", "c"], 0) is empty
end

fun eliminate-e(words :: List<String>) -> List<String>:
  doc: "I got 101 and 69 from string-to-code-points(e) and (E)"
  filter(lam(x):
      not((string-to-code-points(x).member(101)) or (string-to-code-points(x).member(69)))
    end, words)
end

check:
  eliminate-e([list: "e"]) is empty
  eliminate-e([list: "there's", "no", "letter", "e", "here"]) is [list: "no"]
  eliminate-e([list: "hEy", "101e"]) is empty
end
```

Lambda parameters can be a single letter, as in filter(lam(x): x > 1 end), since it's immediately evident what the parameter is and its scope is limited, though you can add annotations like lam(x :: Number). You can also ignore the parameter if you don't plan on using it, replacing x with an underscore. Task: implement our own version of filter, which is similar to what we did for map:

```
fun my-filter<T>(func :: (T -> Boolean), l :: List<T>) -> List<T>:
  cases (List) l:
    | empty => empty
    | link(f, r) => ...
  end
end

check:
  fun length-is-one(s :: String) -> Boolean:
    string-length(s) == 1
  end
  my-filter(length-is-one, [list: "ab", "a", "", "c"]) is [list: "a", "c"]
  my-filter(is-link, [list: empty, link(1, empty), empty]) is [list: link(1, empty)]
  my-filter(lam(x): x > 0 end, [list: 0, 1, -1]) is [list: 1]
end
```

**Fold**

Try the two tasks: list-product and list-max:

```
fun list-product(lon :: List<Number>) -> Number:
  fold(lam(acc, n): acc * n end, 1, lon)
end

check:
  list-product([list: 2, 2, 2]) is 8
  list-product([list: 0, 1, 2]) is 0
  list-product([list: -1, -1]) is 1
end
```

There's a bug in list-max to fix; try all-negative examples. Found by https://github.com/rand-anon-007:

```
fun list-max(lon :: List<Number>) -> Number:
  fold(lam(acc, n): if n > acc: n else: acc end end, 0, lon)
end

check:
  list-max([list: -1, 1, 2]) is 2
  list-max([list: 0, -100, 1]) is 1
  list-max([list: 1/2, 3/4]) is 3/4
end
```

The problem is the accumulator needs to be initialized with the first value of the list not 0:

```
fun list-max(lon :: List<Number>) -> Number:
  fold(lam(acc, n): if n > acc: n else: acc end end, lon.get(0), lon)
where:
  list-max([list: -1, 1, 2]) is 2
  list-max([list: 0, -100, 1]) is 1
  list-max([list: 1/2, 3/4]) is 3/4
  list-max([list: -1, -2, -3]) is -1
  list-max([list: 0]) is 0
  # list-max(empty) what about this case? what's the max of empty?
end
```

If the list is empty then lon.get(0) raises an error:

```
fun list-max(lon :: List<Number>) -> Number:
  if lon == empty:
    0 # is this correct?
  else:
    fold(lam(acc, n): if n > acc: n else: acc end end, lon.get(0), lon)
  end
where:
  list-max([list: -1, 1, 2]) is 2
  list-max([list: 0, -100, 1]) is 1
  list-max([list: 1/2, 3/4]) is 3/4
  list-max([list: -1, -2, -3]) is -1
  list-max(empty) is 0
end
```

Now we have to make a design decision: should it evaluate to 0 for empty, since the max of an empty list is nothing, or should it raise an error that we can handle?

```
fun list-max(lon :: List<Number>) -> Number:
  if lon == empty:
    raise("List can't be empty")
  else:
    fold(lam(acc, n): if n > acc: n else: acc end end, lon.get(0), lon)
  end
where:
  list-max([list: -1, 1, 2]) is 2
  list-max([list: 0, -100, 1]) is 1
  list-max([list: 1/2, 3/4]) is 3/4
  list-max([list: -1, -2, -3]) is -1
  list-max(empty) raises "List can't be empty"
end
```

Write your own fold:

```
fun my-fold<A,B>(func :: (B, A -> B), acc :: B, l :: List<A>) -> B:
  cases (List) l:
    | empty => acc
    | link(f, r) => my-fold(func, func(acc, f), r)
  end
end

check:
  my-fold((lam(acc, elt): acc + elt end), 0, [list: 3, 2, 1]) is 6
  # set accumulator to something not 0
  my-fold((lam(acc, elt): acc + elt end), 10, [list: 3, 2, 1]) is 16
  fun combine(acc, elt) -> String:
    tostring(elt) + " - " + acc
  end
  my-fold(combine, "END", [list: 3, 2, 1]) is "1 - 2 - 3 - END"
  my-fold(combine, "END", empty) is "END"
end
```

That type signature func :: (B, A -> B) means the function you are taking as input has 2 parameters, since fold has an accumulator.

**Map2**

Try the task for implementing who-passed; in the second task we're asked to write map2 ourselves:

```
# link(ff, rr) and link(f, r) can be called anything,
# ie: link(head-first, tail-first) or link(head-second, tail-second)
fun my-map2<A,B>(func :: (A, A -> B), list1 :: List<A>, list2 :: List<A>) -> List<B>:
  cases (List) list1:
    | empty => empty
    | link(f, r) =>
      cases (List) list2:
        | empty => empty
        | link(ff, rr) => ... # what goes here?
      end
  end
end

check:
  my-map2(string-append, [list: "mis", "mal"], [list: "fortune", "practice"])
    is [list: "misfortune", "malpractice"]
  my-map2(_ + _, [list: "mis", "mal"], [list: "fortune", "practice"])
    is [list: "misfortune", "malpractice"]
  my-map2(string-append, [list: "mis", "mal"], [list: "fortune"]) is [list: "misfortune"]
  my-map2(string-append, [list: "mis", "mal"], empty) is empty
  # test type signature
  my-map2(lam(x, y): if (x > 0) and (y > 0): true else: false end end,
    [list: 0, 1], [list: 2, 3]) is [list: false, true]
end
```

The last task is best-price; look on YouTube for what a basic demand function is. Here's an example:

```
fun demand(price-increase :: Number) -> Number:
  doc: "1000 is approx demand of product at a fixed price"
  1000 - (60 * price-increase)
end
```

This means as the price increases, the quantity demanded will decrease by 60 units times the price increase, so if the price increase is 1, the demand is 1000 - 60.

**Map vs Filter vs Foldl vs Foldr**

map and filter only consider one element in a list at a time, whereas fold can consider the entire list via its accumulator. You can implement both map and filter with fold (try it). You may have noticed in the Pyret documentation there are both foldl and foldr:

```
import lists as L

L.foldl(lam(acc, x): acc + x end, 0, [list: 1, 2, 3])
# (((0 + 1) + 2) + 3)  eval nested brackets first, left to right

L.foldr(lam(acc, x): acc + x end, 0, [list: 1, 2, 3])
# (1 + (2 + (3 + 0)))  eval nested brackets right to left
```

Here's an example of why you'd want to use foldr, making your own map function:

```
import lists as L

fun my-map<A,B>(func :: (A -> B), list1 :: List<A>) -> List<B>:
  L.foldr(lam(acc, x): link(func(x), acc) end, empty, list1)
where:
  my-map(num-to-string, [list: 1, 2]) is [list: "1", "2"]
  my-map(lam(x): x + 1 end, [list: 1, 2]) is [list: 2, 3]
  my-map(lam(x): true end, [list: "test", "test2"]) is [list: true, true]
end
```
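
To feel the difference in association, here's a quick Python analogue (my own sketch: Python's `reduce` is a left fold, and I define a right fold by folding the reversed list with the arguments flipped). Addition hides the grouping because it's associative, so subtraction is used to make it visible:

```python
from functools import reduce

# left fold: ((0 - 1) - 2) - 3, combining from the left
assert reduce(lambda acc, x: acc - x, [1, 2, 3], 0) == -6

def foldr(f, base, lst):
    # right fold built from a left fold over the reversed list
    return reduce(lambda acc, x: f(x, acc), reversed(lst), base)

# right fold: 1 - (2 - (3 - 0)), combining from the right
assert foldr(lambda x, acc: x - acc, 0, [1, 2, 3]) == 2

# map via a right fold keeps the output in order, like the Pyret my-map
assert foldr(lambda x, acc: [x + 1] + acc, [], [1, 2]) == [2, 3]
```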

### PAPL chapter 11 *Structured Data*

We are almost caught up to the course now; imagine if you took this course for credit how much work you'd have to put in the first 2 weeks to keep up. This is why it is called 'accelerated'. Here we learn what a data definition is, which we saw a little of in the first lecture. Reading *Introduction to Structured Data*, we're making our own types and doing cases on our custom types instead of Lists.

In 11.3.2 try commenting out one of the cases in fun advice(c :: TLColor). If you click *Run* nothing happens, but if you change it to *Type-Check and Run*, CPO will error that the cases expression should be able to handle all possible variants of TLColor.

## Assignment 2: Nile

After this assignment we are caught up to where students are currently at in the lectures. Here we are restricted from using list built-ins, so member, append, and distinct you'll have to write yourself. The instructor considers rewriting library functions drill exercises, since programming is learned by doing. We need to fill in recommend and popular-pairs according to the spec we're given. Don't use their 2018 starter template; there's some kind of bug where accessing the content of a recommendation doesn't work. Write your own or use this:

````
provide *
provide-types *

data Recommendation<A>:
  | recommendation(count :: Number, content :: List<A>)
end

data File:
  | file(name :: String, content :: String)
end

fun recommend(title :: String, book-records :: List<File>) -> Recommendation:
  doc: ```Takes in the title of a book and a list of files, and returns
       a recommendation of book(s) to be paired with title based on the
       files in book-records.```
  recommendation(0, empty)
end

fun popular-pairs(records :: List<File>) -> Recommendation:
  doc: ```Takes in a list of files and returns a recommendation of the
       most popular pair(s) of books in records.```
  recommendation(0, empty)
end
````

Here are examples to help understand what the assignment is asking for; you can run this to see if the tests are at least well formed:

````
fun recommend(title :: String, book-records :: List<File>) -> Recommendation:
  doc: ```Takes in the title of a book and a list of files, and returns
       a recommendation of book(s) to be paired with title based on the
       files in book-records.```
  recommendation(0, empty)
where:
  r1 = file("a", "1984\nCrime and Punishment\nHeaps are Lame\nLord of the Flies")
  r2 = file("b", "1984\nHeaps are Lame\nLord of the Flies")
  r3 = file("c", "1984\nCrime and Punishment\nHeaps are Lame\nCalculus")
  r4 = file("d", "Crime and Punishment\n1984\nLord of the Flies")
  input = [list: r1, r2, r3, r4]
  recommend("1984", input)
    is recommendation(3, [list: "Heaps are Lame", "Crime and Punishment", "Lord of the Flies"])
  recommend("Heaps are Lame", input)
    is recommendation(3, [list: "Crime and Punishment", "1984", "Lord of the Flies"])
  recommend("War", [list: file("q", "War\nPeace")]) is recommendation(1, [list: "Peace"])
  recommend("PAPL", input) is recommendation(0, empty)
  # These are their tests from the 2016 Nile writeup:
  f1 = file("alist.txt", "1984\nAnimal Farm\nHigurashi\nLife of Pi")
  f2 = file("blist.txt", "Animal Farm\nHigurashi\nLife of Pi")
  f3 = file("clist.txt", "1984\nHeart of Darkness")
  input2 = [list: f1, f2, f3]
  recommend("1925", input2) is recommendation(0, [list: ])
  recommend("1984", input2)
    is recommendation(1, [list: "Animal Farm", "Higurashi", "Life of Pi", "Heart of Darkness"])
end
````

One property to test would be to reverse the procedure: given a list of titles, produce a file.content string, then compare it against your function that produces titles from a file. You solve this assignment the same way we did DocDiff, going through the design recipe.

#### popular-pairs()

Using the 2019 Nile assignment spec:

```
data BookPair:
  | pair(book1 :: String, book2 :: String)
end

input = [list:
  file("alist.txt", "1984\nAnimal Farm\nHigurashi\nLife of Pi"),
  file("blist.txt", "Animal Farm\nHigurashi\nLife of Pi"),
  file("clist.txt", "1984\nHeart of Darkness")]

# recommend("1984", input) returns:
# recommendation(2, [list: "Animal Farm", "Higurashi", "Life of Pi"])
recommendation(2, [list:
  pair("Animal Farm", "Higurashi"),
  pair("Animal Farm", "Life of Pi"),
  pair("Higurashi", "Life of Pi")])
```

Example writing helps here again to figure out how you should pair the results. Notice that Recommendation<A> is polymorphic, so we can change List<String> to List<BookPair> and get Recommendation<BookPair> without having to alter the data definition.

Something to try: think about testing the properties of pairing. A recommendation of 3 titles should have 3 pairings, 4 titles 6 pairings, 5 titles 10 pairings (n(n - 1)/2 in general).

## Lecture 4 Big-O

We've caught up to the class; you can compare the dates of lectures with the due dates of assignments. Watch the Fri 9/14/18 lecture on performance. The setup at the beginning of the lecture distinguishes the notation in O(n) as a function that maps n -> steps to complete the computation. Another explanation is here in *ambiguities in mathematics*.

The constant mentioned at the end of the lecture in the formal notation is further illustrated here. It's an estimation constant that bounds f: no matter the input to f(k), a constant c times g(k) will be greater than or equal to it. What multiple of g(k) is greater than or equal to f(k) -> 5k + 4?

- f(k) -> 5k + 4:
- f(1) -> 5(1) + 4 = 9
- f(2) -> 5(2) + 4 = 14
- f(3) -> 5(3) + 4 = 19
- f(10) -> 5(10) + 4 = 54

- g(k) -> k:
- g(1) = 1
- g(2) = 2
- g(3) = 3

For the constant c, try 10 as suggested in the lecture: f([k -> 5k + 4]) is \(\le\) (10 * g([k -> k])):

- f(1) is 9
- g(1) -> 10 * 1
- f < g

- f(2) is 14
- g(2) -> 10 * 2
- f < g

- f(10) is 54
- g(10) -> 10 * 10
- f < g
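
The bound can also be checked mechanically; here's a small Python sketch of the lecture's claim:

```python
# f(k) = 5k + 4 is bounded above by c * g(k) with c = 10 for all k >= 1
f = lambda k: 5 * k + 4
g = lambda k: k

assert all(f(k) <= 10 * g(k) for k in range(1, 10_001))

# c = 5 fails at every k (5k + 4 > 5k), but any c >= 6 works once k >= 4;
# Big-O only requires that *some* constant exists
assert f(1) > 5 * g(1)
assert all(f(k) <= 6 * g(k) for k in range(4, 10_001))
```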

Big-O notation acts as a suppression of information: you don't care about the constants and almost never say exactly what they are, or even the values of f(n) or g(n); they both stand for quantities that aren't explicitly known, placeholders where you imagine inputs growing toward the maximum of the computational model's resources, very large but usually not infinitely large. Big-O can also be thought of as a set of functions: O(n) is the set of functions whose growth rate is linear, hence why a specific function like f(n) is said to be 'in O(n)', the set of linear functions. You can also think of it as a type, the type of functions whose growth rate is O(n).

To recap this lecture: we need a model of computation in order to begin analyzing algorithms. A simple one that hands out +1 cost for every operation in the syntax (not the real underlying operations happening in the interpreter/compiler) is crude, but since we can ignore constant factors it's fine for worst-case estimation. We use Big-O because it abstracts across machines with different hardware, and it has familiar, convenient arithmetic: O(n) + O(n) is O(n + n) but still O(n), and O(n) * O(n) is O(n * n) or O(n^{2}).

### PAPL Chapter 17 *Predicting Growth*

17.5 *Structural recursion* can be described as: you built up a structure, say a list, by adding one element at a time, and structural recursion takes that structure apart the same way you built it, which for a list is one element at a time. *Generative* recursion rearranges a problem into a set of newly generated problems, basically recursion that ignores structure. Quicksort is an example of generative recursion: you break on a pivot and now have 2 lists plus the pivot, which is not how you assembled that list structurally (there are lectures on quicksort shortly).

The Tabular Method is not strictly necessary, as you will read in 17.8, since we can hand-count these functions like we learned in the lecture. The 'question' refers to the conditionals in the cases statement, like empty and link, and the 'answer' is what that branch of the cases statement evaluates to, like 0 for empty length or the recursive call in link(f, r).

"*Let's consider the first row. The question costs three units (one each to evaluate the implicit empty-ness predicate, l, and to apply the former to the latter)*" is a very involved way of saying: calling the function is +1 cost, opening up l for cases is +1, considering the empty variant is +1, so 3 in total, and once more if the list is actually empty, which you should remember from lecture 4 (Fri 9/14/18). The second row's cost calculation is different from the lecture (none of these tiny costs matter, of course, since for now we are only doing worst-case analysis).

The next question is link(f, r), so count: link +1, f +1, rest +1, 2 more for the addition operation, and +1 to call len() again, for a total of 6 for the 'answer'. In the lecture we instead counted +1 for link, +1 for addition, plus (k - 1), so in the lecture len() totals 5k + 4 while here it's 12k + 4. You can use either this tabular method or the ad hoc 'assign +1 cost to anything that looks like an operation' approach from the lecture, and you will end up with O([k -> k]) either way, since we don't care about constants in this course yet.

17.7 We've seen this notation in lectures, they are lambda functions to remove ambiguities of math notation. 17.8 recall from lectures: 'there exists some constant C, for all inputs n in the set of natural numbers to the function f(n) such that f(n) is less or equal to this same constant C multiplied by some other function g(n) so that f() is bounded by g()'.

17.8 Let's find the smallest constant for 13k + 4 \(\le\) ck^{2}. If k = 1 then c needs to be 17 or larger. If k = 2 we have 30 \(\le\) 17(2^{2}), which is true, and it stays true for k = 3, k = 4; if k = a trillion then 13(1 trillion) + 4 is still less than 17(1 trillion * 1 trillion). In fact, as inputs grow the specific constant becomes irrelevant. Big Theta is then discussed briefly as the family of functions that grow at the same speed up to a constant. Reading PAPL, if you want to understand the math notation, right-click it (or press it on a phone); in most browsers you'll get a menu from MathJax showing the TeX code so you can see exactly what it is.

17.9 *Combining Big-oh Without Woe*: two functions in O(n) run consecutively (one after the other) are O(n + n) or O(2n), which is O(n) since constants don't matter, just like [k -> 5k + 4] is really in O([k -> k]). A function f(n) that invokes z(n) in each of its steps is O(n x n) or O(n^{2}).

### 17.10 Solving recurrences

T(n) means time, or steps, to complete a computation. For the first recurrence example, T(k) = T(k - 1) + c, think of the lecture when we went through length(): a constant amount of 5 steps per input plus everything that came before it, so 5 plus the cost of the remaining (k - 1) inputs, plus 4 for the empty case, which is the same shape as T(k - 1) + c. Here \(c_0\) or \(c_1\) are used to show the base case is either 0 or 1 (the empty case). If you want, go through this set of YouTube examples on recurrence relations to see what a recursion tree looks like.

The next recurrence is T(k-1) + k, not a constant. To understand the notation: \(T(0) + (k-(k-1)) + (k-(k-2)) + \cdots + (k-2) + (k-1) + k\) plug in a value for k:

- T(4) = T(0) + (4 - 3) + (4 - 2) + (4 - 1) + 4
- = 0 + 1 + 2 + 3 + 4 = 10
- 4(4 + 1)/2 = 10 is the closed form solution, which is in O(\(n^2\)) since n(n + 1)/2 = (n^{2} + n)/2.
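
A quick way to convince yourself of the closed form is to unroll the recurrence directly (a Python sketch, taking T(0) = 0 as the base case):

```python
# T(k) = T(k - 1) + k with T(0) = 0, versus the closed form k(k + 1)/2
def T(k):
    return 0 if k == 0 else T(k - 1) + k

assert T(4) == 10
assert all(T(k) == k * (k + 1) // 2 for k in range(0, 200))
```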

2T(n) notation can mean there are two recursive calls or double whatever T(n) is, and since this is a math expression you can always move it to the other side by dividing both sides by 2. The last recurrence relation in this chapter is detailed in this YouTube example.

#### Proof by induction

There's an induction exercise at the end of the chapter; what you are proving is the asymptotic approximation or big-O bound of T(n). For example, here is the solution to T(n) = T(n - 1) + c, proving T(n) \(\le\) n since PAPL claims it is in the O(n) bound. There are YouTube tutorials and other university materials, but generally this is covered in algorithm design books/courses.

## Lecture 5 Insertion Sort Recurrence

We're watching the Mon 9/17/18 lecture. The first 30 minutes or so of the recording doesn't have video, but you can pause the video around 30:14, open the video again to hear the audio, and see everything on the board they are working on (sort and insert). He starts out talking about the invariant of insert: its output length should always be k + 1 because it is inserting a new value. The second topic is how the size of a list doesn't help you catch incorrect types: if you have [list: [list: empty]], that's still size = 1 even though it's empty.

For an if-statement you take the worst-complexity branch, assuming worst-case analysis. The actual cost analysis of insert() starts around 20 minutes in, because you want to resolve the dependencies of sort() by figuring out the complexity class of insert() first. A link operation link(f, r) takes a constant amount of time regardless of the size of the input list, because it doesn't make a copy of the list. The if statement is 1 + (worst case branch).

On the right hand of the board:

- Tinsert(0) = \(c_0\) (constant steps for the empty, or base, case)
- Tinsert(k) = c + Tinsert(k - 1)
- = c + c + c + … + \(c_0\)
- Tinsert(k) = kc + \(c_0\)

Remember we're calling insert(f, sort(r)) on the rest of the list, so it's k - 1.

- Tsort(0) = \(c_0\) or a constant amount of steps
- Tsort(k) = Tsort(k - 1) + Tinsert(k - 1) + c
- = Tsort(k - 1) + kc (since Tinsert(k - 1) is roughly kc)
- We've seen this already: T(k - 1) + k is the series 1 + 2 + 3 + 4 + … + (k - 1) + k, or n(n + 1)/2, so \(O(n^2)\)
- Insertion sort is \(O(n^2)\) worst case, and O(n) + O(1), i.e. O(n), in the best case, because you still have to call sort(r) and do linear work unrolling that n-sized list before insert can do its O(1) best-case operation of linking to the front of the list and returning.
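
To see the quadratic worst case concretely, here's a Python sketch of the lecture's insert/sort pair that counts comparisons (my own translation, not the lecture's Pyret code):

```python
# insert x into an already-sorted list, counting comparisons
def insert(x, sorted_lst, counter):
    if not sorted_lst:
        return [x]
    counter[0] += 1
    if x <= sorted_lst[0]:
        return [x] + sorted_lst
    return [sorted_lst[0]] + insert(x, sorted_lst[1:], counter)

def isort(lst, counter):
    if not lst:
        return []
    return insert(lst[0], isort(lst[1:], counter), counter)

n = 30
counter = [0]
# reverse-sorted input is the worst case: every insert walks the whole list
assert isort(list(range(n, 0, -1)), counter) == list(range(1, n + 1))
assert counter[0] == n * (n - 1) // 2  # 1 + 2 + ... + (n - 1)
```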

### More Recurrences

Before we look at the big-O lab, let's learn some more about recurrences so we 'can feel them in our bones' as suggested by Prof K in the last lecture. I'm going to use Part V *Recurrences* from MIT's *Mathematics for Computer Science* (mcs) free book, starting on page 993 and reading only up to page 1001, since cs19 will cover the analysis (Akra-Bazzi formula/asymptotic solution) in future lectures. For now we're going to read the intro of the Tower of Hanoi and the steps they do to solve the recurrence for merge sort.

There are lectures for the recurrence chapter of mcs; the inductive proof in the recorded lecture for the Tower of Hanoi recurrence is slightly different from the pdf:

- Our original recurrence: \(T_n\) = 2\(T_{n-1}\) + 1
- Inductive Hypothesis(IH)/closed form solution: \(T_n\) = \(2^n\) - 1
- Base Case: \(T_1\) = \(2^1\) - 1
- Inductive step: \(T_{n+1}\) = 2\(T_{(n+1)-1}\) + 1, or 2\(T_n\) + 1
- "Suppose \(T_n\) = \(2^n\) - 1. Then the successor of \(T_n\) is \(T_{n+1}\) = \(2^{n+1}\) - 1"
- Now via substitution plug in our inductive hypothesis, since we declared \(T_n\) = \(2^n\) - 1 wherever you see \(T_n\) insert that hypothesis:
- \(T_{n+1}\) = 2\(T_n\) + 1
- \(T_{n+1}\) = 2(\(2^n\) - 1) + 1 (substitution)
- \(T_{n+1}\) = \(2^{n+1}\) - 2 + 1 (distributive law)
- \(T_{n+1}\) = \(2^{n+1}\) - 1

In mcs (the pdf text), for their IH they assumed that \(T_{n-1}\) = \(2^{n-1}\) - 1, so instead of substituting for \(T_n\) they substituted \(T_{n-1}\):

- Original recurrence: \(T_n\) = 2\(T_{n-1}\) + 1
- Assume/IH: \(T_{n-1}\) = \(2^{n-1}\) - 1
- Inductive step (looking at the left side): from \(T_{n-1}\) the next term is \(T_n\), and we already know its value: \(T_n\) = 2\(T_{n-1}\) + 1
- By substitution replace all occurrences of \(T_{n-1}\) with IH:
- \(T_n\) = 2(\(2^{n-1}\) - 1) + 1
- \(T_n\) = \(2^n\) - 2 + 1
- \(T_n\) = \(2^n\) - 1
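
Both derivations are easy to spot-check by unrolling the recurrence (a Python sketch):

```python
# Tower of Hanoi: T(n) = 2 T(n-1) + 1 with T(1) = 1
def T(n):
    return 1 if n == 1 else 2 * T(n - 1) + 1

# matches the closed form 2^n - 1 that the induction proved
assert all(T(n) == 2 ** n - 1 for n in range(1, 25))
```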

In *Merge Sort*, \(T_n\) describes two recursive calls with the total input divided in half between them, plus the work of merging, which compares the heads of the two lists and emits the smaller value. The plug-and-chug method they use is self-explanatory until Step 3, where they introduce k = log n in order to get their base case of \(T_1\) = 0. When you see \(2^k\) anywhere in the recurrence, replace it with \(2^{\log n}\), which is n because \(a^{\log_a b}\) = \(b\). What you should take away from step 3 is how they manipulate the recurrence in order to produce a known value of \(T_n\), such as \(T_0\) or \(T_1\) in this case.

Step 3, after k = log n has been substituted for all k:

- \(2^{\log n}T_{n/2^{\log n}}\) + log n * n - \(2^{\log n}\) + 1
- Remember \(2^{\log n}\) = n

- n\(T_{n/n}\) + n * log n - n + 1
- n\(T_{n/n}\) is n\(T_1\) and in step 3 \(T_1\) was already declared to be 0

- n(0) + n * log n - n + 1
- n log n - n + 1 or \(T_n \in O(n \log n)\)
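
The same spot-check works here: unroll mcs's merge sort recurrence, T(n) = 2T(n/2) + (n - 1) with T(1) = 0, against the closed form for powers of two (Python sketch):

```python
# merge sort recurrence: T(n) = 2 T(n/2) + (n - 1), T(1) = 0
def T(n):
    return 0 if n == 1 else 2 * T(n // 2) + n - 1

# closed form n*log2(n) - n + 1, checked for n = 2^k so log2(n) = k
for k in range(0, 15):
    n = 2 ** k
    assert T(n) == (n * k) - n + 1
```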

### Return to 17.10 Recurrences

Let's go back and read 17.10 *Solving Recurrences* again, now that we know how logarithms are used to produce a known base-case value like \(T_1\). The first, T(k) = T(k/2) + c, the [k -> log k] recurrence, is binary search. For this binary search example, note the trick used to reduce the recurrence to \(T_1\), and look up how logarithms are defined for \(\log_2\) k.

The next example, T(k) = T(k/2) + k or [k -> k]: again they have used the \(2^{\log k}\) trick to produce k/k, and they have also used the distributive property to factor out k, producing k(1/k + … + 1/4 + 1/2 + 1), which ends up simplifying to the base-case constant + roughly 2k. In the [k -> k * log k] example they get rid of \(2^k\) by replacing it with \(2^{\log k}\), which is k, and this is multiplied by T(1), which is \(c_1\). The rest of the recurrence is k \(\log_2\) k, which is worked through in this article on the power rule: \(\log_2 k^k\) is k \(\cdot\) \(\log_2\) k.

### Practice

The only way to learn this is through practice. Obtain the book *Concrete Mathematics* by Don Knuth et al. (second edition) from Library Genesis. Let's do the first chapter, *Recurrent Problems*. In my digital copy the illustrations have been removed; a complete visualization of the Tower of Hanoi problem is here.

Equation 1.1: all recurrences are defined this way, a base case and the n case. The base case is like the empty case in Pyret when traversing a list; it's whatever stops the recursion, since notice \(T_n\) is defined in terms of itself while \(T_0\) is just 0.

- \(T_1\) = 2(\(T_{1-1}\)) + 1
- \(T_1\) = 2(\(T_0\)) + 1
- \(T_1\) = 2(0) + 1 or 1

- \(T_2\) = 2(\(T_{2-1}\)) + 1
- \(T_2\) = 2(\(T_1\)) + 1 - what is \(T_1\)? It's 1:
- \(T_2\) = 2(1) + 1 or 3

There are 26 exercises for this chapter, all with solutions at the back of the book. You don't have to correctly solve every one: think about the problem, look at the chapter text, play around with it trying small cases as the book suggests, and if after x minutes you still can't figure it out, look at the solution and repeat, allowing more time with each problem. Many of these problems are recycled into lectures; for example, the first exercise, falsely proving horses are all the same color with induction, is in the MIT *Mathematics for CS* lectures on induction.

### Lab: Big-O 1

Let's go through the lab here.

First task: why can we disregard constants? We already know why. g(x) = 5x^{2} + x has the shape T(k) = T(k - 1) + k, which is in O([k -> k^{2}]); since we only care (in this class anyway) about finding an upper bound for the worst case, we don't care about the constant 5 or the lone x, as both are bounded above by x^{2}. We've seen how when inputs grow large, x^{2} grows so much that any constants become insignificant (for worst-case analysis, of course).

Second task: in the chapter *Notation*, the brackets are needed if using the [k -> k] notation, which is why the first example of O(w -> w^{3}) is wrong; it should be f \(\in\) O([w -> w^{3}]). There also shouldn't be any constants in the notation, like [m -> 3m]. The example of f \(\in\) O(h^{2}) should likely be f \(\in\) O([h -> h^{2}]) or f(h) \(\in\) O(h^{2}).

In 3 *The Tabular Method* we finally get an explanation of what the terms 'question' and 'answer' mean; this should probably be in the book. The second row of the cases statement we already read in the book for the len function, and saw in lectures. It really doesn't matter how you count the constant units for worst-case analysis: in lectures we chose link(f, r) as +1 cost and the addition as +1 cost, added to 3k + 4 for a total of 5k + 4. If you carefully counted each operation in link and came up with 100k + 4, it'd still be the same worst-case analysis: len() is in O([k -> k]). Looking back at chapter 17, the T(k) = T(k - 1) + c recurrence is the same as len().

4 *Big-O with less math* explains how to estimate complexity a little more clearly than the book does.

- 1: This could be [k -> k * log k] because for each element we make a call to another function, so O(F X G). This also fits the explanation of f(x) = x * log x, where the recursive call is logarithmic but the input stays the same.
- 2: A linear function k calls another linear function 10x, then calls another linear function, so [n -> n + m] or [n -> n]
- 3: A linear function calls a quadratic function on every element, so [k -> k * m^{2}], which could be [k -> k^{3}]
- 4: At every linear step, remove is called on an entire list, which must linearly search through list l and remove f; then on the recursive call list l possibly shrinks by 1 element. I'm guessing this is O(F X G) again; it is almost the exact definition of quadratic in the lab table.

Let's figure out 5, a function that takes as input 2 lists and counts their length. This translates to two nested cases:

```
fun dbl-count(list1, list2):
  cases (List) list1:
    | empty => 0
    | link(f1, r1) =>
      cases (List) list2:
        | empty => 0
        | link(f2, r2) => 1 + dbl-count(r1, list2) + 1 + dbl-count(list1, r2)
      end
  end
end
```

Looking at chapter 17 in the book, this is T(k) = 2T(k - 1) + c: two recursive calls each doing a constant amount of work. Let's confirm by watching T(n) = 2T(n-1) + 1. Indeed this is exponential, O([k -> 2^{k}]).
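
A quick Python call count (my own sketch) shows why two recursive calls per step blow up exponentially rather than polynomially:

```python
# total calls made by T(k) = 2 T(k - 1) + c: each level doubles
def calls(k):
    return 1 if k == 0 else 1 + 2 * calls(k - 1)

assert calls(10) == (2 ** 11) - 1
assert all(calls(k) == 2 ** (k + 1) - 1 for k in range(0, 20))
```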

- 6: My guess is that this is O(F X G X H^{2}) or O([k -> k^{4}])
- 7: Recall from lectures that you figure out the complexity cost of dependent functions from the function dependencies up, so we want to find the cost of rem-helper() first. It takes a size-k input and an element from rem-dups. We have an if-branch, so we take the worst-case branch, and both branches involve going linearly through the entire list looking at every element. We'll just naively calculate some costs: we go into the function, +1, we open the list, +1, we look at the empty case, +1, and if it is empty we return empty, so 3k + 1. If it's not empty, +1 for the link operation, +1 to compare f to c, worst case I guess +1 to link f, then all the previous operations over again until empty. So this is k + constant, or O([k -> k]). Now look at rem-dups: it does linear work, calling rem-helper() on every single element. The definition of quadratic, 'appears when comparing each element of a data structure to all other elements in that structure', fits this situation, so O([k -> k^{2}]). Looking in the book, we see T(k) = T(k - 1) + k describes this program as well.

5.1 *Solving Recurrences* recall \(c_0\) represents the base case T(0) which is some constant amount of work.

Another task: find the closed form of T(k-2) + 1 if k > 1. There's a lecture for this. The second recurrence relation, 2T(k-1) + 1 for k > 0, we already did; it's O([k -> 2^{k}]). The extra challenge relations are all in the book and exist in this guy's playlist under 2.1.1 - 2.3.3.
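
For the T(k-2) + 1 task, unrolling shows one unit of work for every *two* steps, so the closed form is about k/2, which is still linear (a Python sketch; the base cases T(0) = T(1) = 0 are my own choice):

```python
# T(k) = T(k - 2) + 1, base cases T(0) = T(1) = 0
def T(k):
    return 0 if k <= 1 else T(k - 2) + 1

assert T(10) == 5
assert all(T(k) == k // 2 for k in range(0, 100))  # floor(k/2): O([k -> k])
```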

We will see this many times over in the future to clear up any misunderstandings but you should have an idea of what worst-case complexity means and how to at least guess what it is by looking at code.

## Lecture 6 Quicksort

We're watching the lecture titled Wed9/19/18. Reminder: these lectures can be viewed or downloaded in 1920x1080, so you can zoom with whatever video player if needed, but seeing the fine details isn't important; he is showcasing the structural difference between linear [k -> k] and quadratic [k -> \(k^2\)].

He remarks that their quicksort implementation is really a filter, and we should try quicksort using a filter:

```
fun combine(lt, p, gt):
  lt.append([list: p]).append(gt)
end

fun qsort(l):
  cases (List) l:
    | empty => empty
    | link(pivot, r) =>
      combine(
        qsort(r.filter(lam(n): n < pivot end)),
        pivot,
        qsort(r.filter(lam(n): n >= pivot end)))
  end
end

check:
  qsort([list: 2, 1, 3, 4, 5]) is [list: 1, 2, 3, 4, 5]
  qsort([list: 100, 0, 1, 343]) is [list: 0, 1, 100, 343]
  qsort([list: 0, -1, 100]) is [list: -1, 0, 100]
  qsort([list: ]) is ([list: ])
  qsort([list: 8, 1, 1, 9, -1]) is [list: -1, 1, 1, 8, 9]
end
```

He ends the lecture trying to convey a bigger idea: use the structure of the data to come up with a simple solution, confirm the inefficient solution is correct, optimize it according to the bound you want.

## Assignment 3: Sortacle

In this assignment we're building a testing oracle.

First function is generate-input(n :: Number) -> List<Person>, it consumes a number, and produces a list of length n of type Person. Since this function is going to generate random person.name strings we can't directly match the output in a test case, but we can still confirm it has the correct properties of [list: person(name, number)]:

```
data Person:
  | person(name :: String, age :: Number)
end

fun generate-input(n :: Number) -> List<Person>:
  doc: "Generate random n size list of person(name, age), name is length 4, age is 1 - 100"
  ...
where:
  # confirm size
  L.length(generate-input(5)) is L.length([list: 1, 2, 3, 4, 5])
  map(lam(x): (string-length(x.name) > 0) is true end, generate-input(5))
  # make sure the age > 0
  map(lam(x): x.age > 0 is true end, generate-input(5))
  # confirm name is ASCII code points from 97 - 122
  # string-to-code-points() returns a list, so nested map
  map(lam(x):
      map(lam(y): ((y >= 97) and (y <= 122)) is true end,
        string-to-code-points(x.name))
    end, generate-input(3))
end
```

Pyret has a built-in function for generating a list, repeat(), which we will need to write ourselves since this assignment doesn't allow built-ins outside of the higher-order functions and sort() or sort-by(), though I used some other list built-ins here and in the test cases since we already rewrote them in the Nile assignment. One way we could architect this is recursing on n, linking a list together. num-random() gives us something resembling a random name, see this article. Code points I guess could be between 97 and 122, or 'a' and 'z', though you could just leave this random and fuzz test with all unicode code points, since our sorters could hold foreign unicode names or symbols or anything:

```
import lists as L

data Person:
  | person(name :: String, age :: Number)
end

fun generate-input(n :: Number) -> List<Person>:
  doc: "Generate random n size list of person(name, age)"
  fun random-name(s :: Number, name-string :: String) -> String:
    doc: "Generate a random string between ASCII 97 - 122 of length s"
    if s < 1:
      name-string
    else:
      single = num-random(33) + 90
      if (single >= 97) and (single <= 122):
        random-name(s - 1, string-from-code-point(single) + name-string)
      else:
        random-name(s, name-string)
      end
    end
  where:
    string-length(random-name(5, "")) is 5
    string-length(random-name(1, "")) is 1
  end

  fun helper(num :: Number, acc :: List<Person>) -> List<Person>:
    doc: "Consumes n, init acc and returns [list: person(name, age)] n times"
    if num < 1:
      acc
    else:
      name-length = num-random(25) + 1
      # acc on first call is a placeholder here for empty
      # this will return ...link(item, link(item, orig-acc))
      helper(num - 1,
        link(person(random-name(name-length, ""), num-random(100) + 1), acc))
    end
  end

  # preconditions
  if n < 1:
    raise("Input must be greater than 0")
  else:
    helper(n, empty)
  end
where:
  # confirm size
  L.length(generate-input(5)) is L.length([list: 1, 2, 3, 4, 5])
  map(lam(x :: Person): (string-length(x.name) > 0) is true end, generate-input(2))
  # make sure the age is within 1 - 100
  map(lam(x :: Person): x.age > 0 is true end, generate-input(2))
  # confirm name is ASCII code points from 97 - 122
  map(lam(x :: Person):
      map(lam(y): ((y >= 97) and (y <= 122)) is true end,
        string-to-code-points(x.name))
    end, generate-input(2))
  # confirm input is > 0
  generate-input(0) raises "Input must be greater than 0"
end
```

Now we have what the assignment is looking for:

```
>>> generate-input(2)
[list: person("vwqukq", 3), person("rljhrsijqcwyuglwjtnhc", 48)]
```

Next is a sort validation function that consumes two sorted lists of type Person and compares whether they are the same, returning a Boolean true or false. To compare whether the first sorted input list is equal to the correct sorted list, we could map2 over both lists to do equality tests, then fold over the list map2 returns, comparing each entry with 'true'.

Note this is **inefficient** as a cases on List can break on the first instance of false instead of continuing through the entire linear list:

```
fun is-valid(test-input :: List<Person>, correct-input :: List<Person>) -> Boolean:
  doc: "Consumes test-input and compares to correct-input a correct-sorter() list"

  fun length-test(test-input-a :: List<Person>, correct-input-a :: List<Person>) -> Boolean:
    doc: "Test if both lists are the same length"
    L.length(test-input-a) == L.length(correct-input-a)
  end

  fun name-test(test-input-b :: List<Person>, correct-input-b :: List<Person>) -> Boolean:
    doc: "Test if any names have been altered"
    correct-sorted-names = correct-sorter(test-input-b)
    names = map2(lam(n1 :: Person, n2 :: Person): n1.name == n2.name end,
      correct-sorted-names, correct-input-b)
    L.foldl(lam(a :: Boolean, b :: Boolean): a and b end, true, names)
  end

  fun age-sort-test(test-input-c :: List<Person>, correct-input-c :: List<Person>) -> Boolean:
    doc: "Test if sorted by age correctly"
    age-test = map2(lam(n3 :: Person, n4 :: Person): n3.age == n4.age end,
      test-input-c, correct-input-c)
    L.foldl(lam(c :: Boolean, d :: Boolean): c and d end, true, age-test)
  end

  fun age-and-name-sort-test(test-input-d :: List<Person>, correct-input-d :: List<Person>) -> Boolean:
    doc: "See if the names and ages match, this will fail occasionally for qsort on duplicate ages"
    both-test = map2(lam(n5 :: Person, n6 :: Person):
        (n5.name == n6.name) and (n5.age == n6.age)
      end, test-input-d, correct-input-d)
    L.foldl(lam(e :: Boolean, f :: Boolean): e and f end, true, both-test)
  end

  # return the results
  if length-test(test-input, correct-input) and
    name-test(test-input, correct-input) and
    age-sort-test(test-input, correct-input) and
    age-and-name-sort-test(test-input, correct-input):
    true
  else:
    false
  end
where:
  # some corner cases, omitted many tests
  is-valid(empty, empty) is true
  is-valid([list: person("a", 31)], empty) is false
  is-valid([list: person("", 31)], [list: person("a", 31)]) is false
end
```

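As a sketch of that short-circuiting alternative (the helper name `same-pairs` and the exact pairwise name-and-age comparison are my own choices, not from the assignment), a single recursive pass can return false at the first mismatched pair:

```
data Person:
  | person(name :: String, age :: Number)
end

fun same-pairs(l1 :: List<Person>, l2 :: List<Person>) -> Boolean:
  doc: "True if both lists pair up with equal name and age; stops at the first mismatch"
  cases (List) l1:
    | empty => is-empty(l2)
    | link(f1, r1) =>
      cases (List) l2:
        | empty => false
        | link(f2, r2) =>
          if (f1.name == f2.name) and (f1.age == f2.age):
            same-pairs(r1, r2)
          else:
            false
          end
      end
  end
where:
  same-pairs(empty, empty) is true
  same-pairs([list: person("a", 1)], [list: person("a", 1)]) is true
  same-pairs([list: person("a", 1)], [list: person("b", 1)]) is false
end
```

Unlike the map2-and-fold version, this does no work past the first differing pair, and it handles unequal lengths without a separate length test.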
Now we go about writing multiple buggy implementations of sorting algorithms and test these functions with oracle :: (List<Person> -> List<Person>) -> Boolean, e.g. oracle(insertion-sort) or oracle(quick-sort):

```
fun combine(lt, p, gt):
  doc: "Used by incorrect-quicksort"
  lt.append([list: p]).append(gt)
end

fun qsort(l :: List<Person>) -> List<Person>:
  doc: "Quicksort from lectures"
  cases (List) l:
    | empty => empty
    | link(pivot, r) =>
      combine(
        qsort(r.filter(lam(n): n.age < pivot.age end)),
        pivot,
        qsort(r.filter(lam(n): n.age >= pivot.age end)))
  end
end

fun incorrect-qsort(l :: List<Person>) -> List<Person>:
  doc: "Sabotaged quicksort"
  cases (List) l:
    | empty => empty
    | link(pivot, r) =>
      combine(
        incorrect-qsort(r.filter(lam(n): n.age < pivot.age end)),
        pivot,
        incorrect-qsort(r.filter(lam(n): n.age > pivot.age end)))
  end
end

# empty case test
fun empty-sort(p :: List<Person>) -> List<Person>:
  empty
end

# deletes the names
fun name-screwed(w :: List<Person>) -> List<Person>:
  correct = L.sort-by(w,
    lam(p1, p2): p1.age < p2.age end,
    lam(p1, p2): p1.age == p2.age end)
  map(lam(x): person("", x.age) end, correct)
end

fun oracle(f :: (List<Person> -> List<Person>)) -> Boolean:
  doc: "Testing oracle for sorting lists of type Person"
  random-input = generate-input(num-random(30) + 5)
  is-valid(f(random-input), correct-sorter(random-input))
where:
  oracle(qsort) is true
  oracle(incorrect-qsort) is false
  oracle(name-screwed) is false
  oracle(empty-sort) is false
end
```

These tests won't always pass, because sometimes incorrect-qsort() will produce the correct output; it's only on edge cases with duplicate ages that it won't pass our oracle test case. Quicksort will also sometimes order people with duplicate ages (or similar names with different ages) differently from the sequence correct-sorter() uses, orderings that are correctly sorted but in a different sequence. This is a poor solution where our oracle just fails without any information, but I considered it good enough for the purposes of this assignment since we aren't being graded.

### Check your work against the official solution

Write your solution first, then read this pdf. He writes about how this course encourages property-based testing (which, as we've been doing, means testing properties instead of values), and there is a thorough writeup about this sortacle assignment: what they graded for and exactly what would fail their tests.

We find out they didn't really care about the generator, only that it is a good drill exercise; they primarily tested is-valid, extracting only that function to test. They broke the testing down into specific properties is-valid should have and exactly how they implemented their own tests for these properties.

My implementation passed most of their grading tests, as my tests checked whether outputs differed from inputs (missing names/ages/changed sizes). And although my tests caught whenever correct-sorter hit a corner case, like quicksort ordering duplicate ages differently when sorting by age, I didn't handle this false is-valid with additional checks, and since these sorts were still valid my implementation would be useless for automated testing. I also sanitized generate-input, so is-valid did not check for out-of-bounds ages, and I failed that too.

The grading tests for the oracle assignment (actually, all the assignments) are also in the same paper. Don't read it until after you try your own implementation, to see if you are picking up these property-based testing techniques, but more or less the solutions are there.

### Logic for Systems

The techniques they used in the paper are taught in their *Logic for Systems* class which I partially archived with some recorded lectures if you're interested in this kind of logical thinking about programs.

Partial archive of 195y if you're interested, before recorded lectures disappear:

magnet:?xt=urn:btih:3187a9951b43b85771a975653680a4a8d9faf61d&dn=195y

Uses the book *Software Abstractions* by D. Jackson @ MIT: https://mitpress.mit.edu/books/software-abstractions-revised-edition

## Assignment 4: Data Scripting

This assignment is yet more practice that mirrors what a typical developer has to do: unravelling data out of nested structures and writing quick, basic scripts.

#### is-palindrome() and sum-largest()

Straightforward to figure out using map/map2/fold.

#### adding-machine()

Notice the two consecutive 0's in the test case, which will screw up your list/empty cases if you're linking on occurrences of 0, unless you filter the returned accumulator.

#### bmi-report()

Template is:

```
data PHR:
  | phr(name :: String,
      height :: Number,
      weight :: Number,
      heart-rate :: Number)
end

data Report:
  | bmi-summary(under :: List<String>,
      healthy :: List<String>,
      over :: List<String>,
      obese :: List<String>)
end

fun bmi-report(phrs :: List<PHR>) -> Report:
  ...
where:
  bmi-report([list:
      phr("eugene", 2, 60, 77),
      phr("matty", 1.55, 58.17, 56),
      phr("ray", 1.8, 55, 84),
      phr("mike", 1.5, 100, 64)]) is bmi-summary(
    [list: "eugene", "ray"], # under
    [list: "matty"], # healthy
    [list: ], # over
    [list: "mike"] # obese
    )
end
```

My solution was to write a helper function bmi(p :: PHR) -> Number and then filter each list for under, healthy, over, obese, returning bmi-summary by mapping lam(x): x.name over those filtered lists. Try adding more test cases, like the empty case or multiple identical names. The assignment claims we can assume no entry will be zero, so we don't need a test case for division by zero.
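A minimal sketch of that helper-plus-filter approach; the BMI formula (weight divided by height squared) and the 18.5/25/30 category cutoffs are my assumptions about what the assignment intends, and the check is the assignment's own example:

```
data PHR:
  | phr(name :: String, height :: Number, weight :: Number, heart-rate :: Number)
end

data Report:
  | bmi-summary(under :: List<String>, healthy :: List<String>,
      over :: List<String>, obese :: List<String>)
end

fun bmi(p :: PHR) -> Number:
  doc: "Body mass index: weight (kg) divided by height (m) squared (assumed formula)"
  p.weight / (p.height * p.height)
end

fun bmi-report(phrs :: List<PHR>) -> Report:
  doc: "Partition names into the four BMI categories"
  names = lam(l): map(lam(x): x.name end, l) end
  bmi-summary(
    names(phrs.filter(lam(p): bmi(p) < 18.5 end)),
    names(phrs.filter(lam(p): (bmi(p) >= 18.5) and (bmi(p) < 25) end)),
    names(phrs.filter(lam(p): (bmi(p) >= 25) and (bmi(p) < 30) end)),
    names(phrs.filter(lam(p): bmi(p) >= 30 end)))
where:
  bmi-report([list:
      phr("eugene", 2, 60, 77),
      phr("matty", 1.55, 58.17, 56),
      phr("ray", 1.8, 55, 84),
      phr("mike", 1.5, 100, 64)]) is bmi-summary([list: "eugene", "ray"], [list: "matty"], [list: ], [list: "mike"])
end
```

Each filter walks the whole list, so this is a handful of linear passes; fine here, since clarity is the point of the exercise.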

#### data-smooth()

You'll want tests for empty, one element, two elements, and the given assignment example:

```
fun data-smooth(phrs :: List<PHR>) -> List<Number>:
  doc: "Smooth heart rate data"
  ...
where:
  # given example from assignment writeup
  data-smooth([list:
      phr("eugene", 2, 60, 95),
      phr("matty", 1.55, 58.17, 102),
      phr("ray", 1.8, 55, 98),
      phr("mike", 1.5, 100, 88),
      phr("a", 2, 2, 105)]) is [list: 95, 295/3, 96, 97, 105]
  # empty test
  data-smooth(empty) is empty
  # one element test
  data-smooth([list: phr("spock", 3, 2, 1)]) is [list: 1]
  # two element test
  data-smooth([list: phr("spock2", 1, 2, 90), phr("spock3", 3, 4, 91)]) is [list: 90, 91]
end
```

#### frequent-words()

I wrote this template:

```
# optional
data Freq:
  | frequency(word :: String, count :: Number)
end

fun frequent-words(words :: List<String>) -> List<String>:
  ...
where:
  frequent-words([list:
      "silver", "james", "james", "silver", "howlett",
      "silver", "loganne", "james", "loganne"]) is [list: "james", "silver", "loganne"]
  frequent-words([list: "a", "a", "a", "a", "b", "b", "c"]) is [list: "a", "b"]
  # corner case, all words are only 1 frequency so return sorted
  frequent-words([list: "james", "so"]) is [list: "so", "james"]
  frequent-words(empty) is empty
end
```

The assignment wants a list of words returned in descending order of frequency count, sorted by smallest string length when there's a tie. As I interpret the assignment: if the highest frequency count is greater than 1, you return those words; otherwise, if no frequency is greater than 1, you return the sorted list.

#### daily-max-for-month()

We're asked to take [list: 20151004, 150, 200, 175, 20151005, 0.002, 0.03, 20151007, 130, 0.54, 20151101, 78] and turn it into a daily maximum [list: max-hz(20151004, 200), max-hz(20151005, 0.03), max-hz(20151007, 130)] if the searched-for month is October. Our template:

```
data Report:
  | max-hz(date :: Number, max-reading :: Number)
end

fun daily-max-for-month(sensor-data :: List<Number>, month :: Number) -> List<Report>:
  ...
where:
  input = [list:
    20151004, 150, 200, 175,
    20151005, 0.002, 0.03,
    20151007, 130, 0.54,
    20151101, 78]
  daily-max-for-month(input, 10) is [list:
    max-hz(20151004, 200),
    max-hz(20151005, 0.03),
    max-hz(20151007, 130)]
end
```

## Lecture 7 Trees

We're watching the Fri9/21/18 lecture on 2D tree-shaped data w/guest lecturer Tim Nelson, who is an expert in software verification. An interesting paper from his page, authored with Prof K, that you should read is *The Human in Formal Methods*. In it they write about porting the method of example writing we've been doing to this domain. A common learning mistake students make is discussed: they write programs before they have understood the problem, and as a result they 'solve' the wrong problem, which misdirects them from the learning goals. If you read the paper linked in the first DocDiff assignment you already know this: never jump into writing the solution first or you are wasting your time.

To understand the mutually recursive datatypes, write out your own tree:

```
      a
     / \
    b   none
   / \
  c   none
   \
    none
```

a has child b and none; b has child c and none; c has none.

- start with a:
  - person("a", No-children)
- add in b:
  - person("a", **child(person("b", No-children)**, No-children))
- add in c:
  - person("a", child(person("b", **child(person("c", No-children)**, No-children)), No-children))

### PAPL Chapter 13 Recursive datatypes

This chapter covers the recursive data definitions we saw in Lecture 7, and hand-steps through a recursive function; it seems like this should be in chapter 1. The design recipe template from the HtDP book for recursive data definitions is also shown, so now you have the full design recipe for future assignments.

## Lecture 8 Sets

We're watching the Mon9/24/18 lecture on sets. This is a design lecture: given something that has no notion of order, how do you design its data structure representation?

When he shortens insert() to equal link, try it in the Pyret REPL. The end of this lecture discusses the big-O complexity of both representations; he points out the inefficient representation can be better depending on what you care about, like appending to a log you often don't look at.
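A minimal sketch of that list representation (function names are mine): insert really is just link, constant time, while membership pays the linear cost:

```
# Set represented as a plain list; insert doesn't check for duplicates
fun insert<T>(elt :: T, s :: List<T>) -> List<T>:
  doc: "Constant-time insert: just link the element on"
  link(elt, s)
end

fun is-in<T>(elt :: T, s :: List<T>) -> Boolean:
  doc: "Linear-time membership test"
  cases (List) s:
    | empty => false
    | link(f, r) => (f == elt) or is-in(elt, r)
  end
end

check:
  is-in(3, insert(3, insert(2, empty))) is true
  is-in(5, insert(3, insert(2, empty))) is false
end
```

This is the trade-off the lecture ends on: if you mostly insert and rarely query, the "inefficient" representation wins.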

### PAPL Chapter 12 & 18.1 Sets

We haven't covered 18.2 yet so just reading 18.1. The lecture covered everything in these 2 chapters and goes into further detail about choosing how to build a representation.

There's a link to the glossary, with interesting definitions in there such as links to a paper on coalgebras/coinduction. In the size() recurrence, remember the chapter *Predicting Growth*, where common recurrences are listed, such as T(k) = T(k - 1) + k being \([k \rightarrow k^2]\) because its closed-form solution is \(\frac{k^2 + k}{2}\); hence T(d) = d + (d - 1) is also \([d \rightarrow d^2]\).

### PAPL Chapter 15 Oracles

This is a short, unfinished chapter on testing. In testing blocks we have some new binary and unary test operators we can use, like *expr1 is%(function) expr2* or *expr1 satisfies (function)*. The Pyret documentation has examples; is%(function) exists so you can test your heavily optimized new function against a slower, proven-correct version and ensure the outputs are the same.

## Assignment 5: Oracle

Let's build a testing oracle. There's a starter template we need in order to access the functions they give us, matchmaker() and is-hire():

````
# CSCI0190 (Fall 2018)
provide *
provide-types *

import shared-gdrive("oracle-support.arr", "11JjbUnU58ZJCXSphUEyIODD1YaDLAPLm") as O

##### PUT IMPORTS BELOW HERE ############

##### PUT IMPORTS ABOVE HERE ############

type Hire = O.Hire
hire = O.hire
is-hire = O.is-hire
matchmaker = O.matchmaker

# DO NOT CHANGE ANYTHING ABOVE THIS LINE

fun generate-input(num :: Number) -> List<List<Number>>:
  doc: ```generates a list of candidates or companies for
       a group of size num```
  empty
where:
  generate-input(0) is empty
end

fun is-valid(
    companies :: List<List<Number>>,
    candidates :: List<List<Number>>,
    hires :: Set<Hire>) -> Boolean:
  doc: ```Tells if the set of hires is a good match for the given
       company and candidate preferences.```
  true
where:
  is-valid(empty, empty, empty-set) is true
end

fun oracle(a-matchmaker :: (List<List<Number>>, List<List<Number>> -> Set<Hire>)) -> Boolean:
  doc: ```Takes a purported matchmaking algorithm as input and outputs
       whether or not it always returns the correct response```
  true
where:
  oracle(matchmaker) is true
end
````

Run the template then try to give matchmaker some inputs to see what it does:

```
companies = [list: [list: 1, 0, 2], [list: 0, 2, 1], [list: 0, 2, 1]]
candidates = [list: [list: 1, 0, 2], [list: 1, 2, 0], [list: 0, 1, 2]]
# Note matchmaker writeup: it is 0 to n-1 indexed, so 3 lists means 0 - 2

>> matchmaker(companies, candidates)
[list-set: hire(2, 1), hire(1, 0), hire(0, 2)]
```

First let's understand the matching algorithm; we don't have to write one ourselves. Let's look at the actual paper. Scroll down to Theorem 1, *There always exists a stable set of marriages*, and look at the proof for the algorithm:

- First round:
- Companies all select their first preferred candidate
- Candidates accept provisionally, if a more preferred offer is presented in next rounds they will reject this one
- Rejected companies are put into a pool

- Second round:
- Companies in the pool with no match then all select their second preferred candidate
- Candidates accept or reject upon preferences
- X rounds:
- Repeat, noting that no company can offer more than once to a candidate

The 2019 assignment writeup contains a few hints, such as 'be cautious about using the provided matchmaker function in your oracle as this assumes that there is only one right answer to a given instance of the stable hiring problem'. I translate this to mean there could be many permutations of the stable-hiring problem which are still correct; however, the Gale-Shapley algorithm produces the exact same matching every time if the companies are making offers and the candidates rejecting. What if the roles are reversed, so candidates make offers and companies reject? We have two inputs into matchmaker, but it doesn't say which one is actually making the offers and which one is rejecting. We should try swapping the inputs and see what happens:

```
companies = [list: [list: 1, 0, 2], [list: 0, 2, 1], [list: 0, 2, 1]]
candidates = [list: [list: 1, 0, 2], [list: 1, 2, 0], [list: 0, 1, 2]]

>> matchmaker(candidates, companies)
[list-set: hire(2, 2), hire(1, 0), hire(0, 1)]
>> matchmaker(companies, candidates)
[list-set: hire(2, 1), hire(1, 0), hire(0, 2)]
```

They're different, so there is indeed more than one solution to the problem, depending on whether the bias is towards candidates or companies. So how are we going to test the stability property for any stable matching? I found an old book, entirely on the Stable Marriage problem, here; on page 8, *Stability Checking*, we are given a stability checking algorithm, and on page 7 this is more clearly defined: for each company C, look at their preferences and check that each candidate they prefer over their match is not matched to somebody else while also preferring company C.

- Some tests to implement:
- check the hire pairs are bounded with the input
- no hire(7,1) if input is only 3 companies

- check n companies and n candidates are in n pairs
- check for duplicate matchings
- check they are stable as per the algorithm in that book


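Here's a hedged sketch of that page-8 stability check. The `pairing` datatype is a stand-in I made up for the assignment's Hire values (so it can run standalone), preference lists are assumed to be ordered most-preferred first, and `rank` is my helper; the check uses the matching we got from matchmaker above:

```
import lists as L

# pairing(co, ca) is a hypothetical stand-in for the assignment's hire values
data Pairing:
  | pairing(co :: Number, ca :: Number)
end

fun rank(prefs :: List<Number>, x :: Number) -> Number:
  doc: "Position of x in prefs; a smaller rank means more preferred"
  cases (List) prefs:
    | empty => raise("not in preference list")
    | link(f, r) => if f == x: 0 else: 1 + rank(r, x) end
  end
end

fun is-stable(
    companies :: List<List<Number>>,
    candidates :: List<List<Number>>,
    matches :: List<Pairing>) -> Boolean:
  doc: "True when no company and candidate both prefer each other over their matches"
  L.all(
    lam(m):
      co-prefs = companies.get(m.co)
      # every candidate this company ranks above its assigned match...
      better = co-prefs.filter(lam(ca): rank(co-prefs, ca) < rank(co-prefs, m.ca) end)
      # ...must prefer their own company to this one, else the pair is unstable
      L.all(
        lam(ca):
          their-co = matches.filter(lam(m2): m2.ca == ca end).first.co
          ca-prefs = candidates.get(ca)
          rank(ca-prefs, their-co) < rank(ca-prefs, m.co)
        end, better)
    end, matches)
end

check:
  is-stable(
    [list: [list: 1, 0, 2], [list: 0, 2, 1], [list: 0, 2, 1]],
    [list: [list: 1, 0, 2], [list: 1, 2, 0], [list: 0, 1, 2]],
    [list: pairing(0, 2), pairing(1, 0), pairing(2, 1)]) is true
end
```

The same predicate accepts any stable matching, not just the one matchmaker happens to return, which is exactly what the oracle needs.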
This was my good-enough generate-input function for inputs where num < 25; for more than that it needs optimization, like using a dictionary data structure, keeping a sorted accumulator to check for dupes, or using filter:

```
fun generate-input(num :: Number) -> List<List<Number>>:
  doc: 'generates a list of candidates or companies for a group of size num'

  fun test-for-dupes(input :: List<Number>) -> List<Number>:
    doc: 'Checks for duplicate prefs in the list'
    distinct-input = L.distinct(input)
    if L.length(distinct-input) < num:
      test-for-dupes(distinct-input +
        map(lam(x): num-random(num) end, repeat(num - distinct-input.length(), 0)))
    else:
      input
    end
  end

  fun create-list(n :: Number) -> List<Number>:
    doc: 'Assembles list of distinct prefs'
    randomized = map(lam(x): num-random(n) end, repeat(n, 1))
    test-for-dupes(randomized)
  end

  if num <= 0:
    # If empty, return empty
    empty
  else if num == 1:
    # If n, return n - 1
    [list: [list: 0]]
  else:
    # repeat gives a skeleton list to map over of size num
    map(lam(x): create-list(num) end, repeat(num, 1))
  end
where:
  generate-input(0) is empty
  generate-input(-1) is empty
  generate-input(1) is [list: [list: 0]]
  L.length(generate-input(3)) is 3
  # omitted a bunch of tests here
end
```

### Check your solution

The same paper we read for the Sortacle assignment has a writeup for this one too. Once again they didn't really care that much about the input generation functions. Checking my own work: they cared that is-valid tested for stability, uniqueness and completeness, which my tests all covered, mainly because I found that old book on the stable matchmaking problem that handed me a stability testing algorithm, and because from years of reading hacky software I just expected disaster, so I tested for dupes and incompleteness.

How they came up with these tests is a good read: they used Alloy, taught in cs195y (also the book *Software Abstractions* by D. Jackson mentioned in the sortacle assignment), to produce a model where all three properties were satisfied, then had Alloy produce unstable matchings while preserving the other two properties, unique and complete. It's sort of like using set theory to model the logic of your programs.

## Lecture 9 Logarithmic Growth

We're watching lecture Wed/9/26/18. This is a light lecture just explaining the intuition behind logarithmic complexity, to set up balanced binary search trees. At 7:10 he explains logarithms: "A (base-10) logarithm of a number is essentially how many digits a number has. That's it". He then explains in terms of virtual machine costs: if you have a logarithmic algorithm and there are a million new runs of that algorithm, you are only paying 6x the cost.

## Lecture 10 Balanced Binary Search Trees

This is lecture Fri/9/28/18. We discover the entire last lecture was just trolling: an unbalanced tree would not give us logarithmic time, because we can't reliably throw away half the data. This whole lecture is about understanding how you can balance a tree.

### PAPL Chapter 18.2 Trees

All of this was covered in the lecture, including most of the exercises. There's an interesting writeup about how hashing functions work.

### Lab: Tree Traversal

Let's try the CS19 lab for trees. The first part is just thought exercises, then the lab offers us stencil code for the BTree lab sections:

```
# Tree traversal
data BTree<T>:
  | mt
  | node(value :: T, left :: BTree<T>, right :: BTree<T>)
end

fun btree-in-order<A>(tree :: BTree<A>) -> List<A>:
  doc: "Returns the elements of tree in a list via an in-order traversal"
  ...
where:
  nothing
end

fun btree-pre-order<A>(tree :: BTree<A>) -> List<A>:
  doc: "Returns the elements of tree in a list via a pre-order traversal"
  ...
where:
  nothing
end

fun btree-post-order<A>(tree :: BTree<A>) -> List<A>:
  doc: "Returns the elements of tree in a list via a post-order traversal"
  ...
where:
  nothing
end
```

First make a tree, run it, and type 'a-tree' into the Pyret interpreter; click the output to see the right and left branches, or use a-tree.left and a-tree.right:

```
a-tree =
  node(5,
    node(4, node(3, mt, mt), mt),
    node(6, node(7, mt, mt), node(8, mt, mt)))

>> a-tree.left
node(4, node(3, mt, mt), mt)
>> a-tree.right
node(6, node(7, mt, mt), node(8, mt, mt))
>> a-tree.value
5
```

The lab wants us to write various traversals. Here's what I wrote for btree-in-order, which is basically the other two orders as well; just rearrange helper(l) + root + helper(r):

```
data BTree<T>:
  | mt
  | node(value :: T, left :: BTree<T>, right :: BTree<T>)
end

a-tree =
  node(5,
    node(4, node(3, mt, mt), mt),
    node(6, node(7, mt, mt), node(8, mt, mt)))

# degenerate tree from PAPL chapter 17.2.2
b-tree = node(1, mt, node(2, mt, node(3, mt, node(4, mt, mt))))

fun btree-in-order<A>(tree :: BTree<A>) -> List<A>:
  doc: "Returns the elements of tree in a list via an in-order traversal"
  fun helper(n :: BTree<A>) -> List<A>:
    cases (BTree) n:
      | mt => empty
      | node(v, l, r) => helper(l) + link(v, helper(r))
    end
  end
  helper(tree)
where:
  btree-in-order(a-tree) is [list: 3, 4, 5, 7, 6, 8]
  btree-in-order(b-tree) is [list: 1, 2, 3, 4]
end
```

Another lab where we rewrite map/filter/fold. When you see Prof K's lecture on knowledge transfer below, you'll understand why he makes us do this over and over. This is my first attempt at map; I wrote it like he writes software on the board in class, thinking through the data definition:

```
data BTree<T>:
  | mt
  | node(value :: T, left :: BTree<T>, right :: BTree<T>)
end

a-tree =
  node(5,
    node(4, node(3, mt, mt), mt),
    node(6, node(7, mt, mt), node(8, mt, mt)))

# degenerate tree from PAPL chapter 17.2.2
b-tree = node(1, mt, node(2, mt, node(3, mt, node(4, mt, mt))))

fun btree-map<A, B>(f :: (A -> B), tree :: BTree<A>) -> BTree<B>:
  doc: "Recursively applies f to the value of every node contained in tree"
  cases (BTree) tree:
    | mt => mt
    | node(v, l, r) => node(f(v), btree-map(f, l), btree-map(f, r))
  end
where:
  btree-map(lam(x): x + 1 end, a-tree) is node(6, node(5, node(4, mt, mt), mt), node(7, node(8, mt, mt), node(9, mt, mt)))
  btree-map(lam(x): x + 1 end, b-tree) is node(2, mt, node(3, mt, node(4, mt, node(5, mt, mt))))
end

>> btree-map(lam(x): x * x end, a-tree)
node(25, node(16, node(9, mt, mt), mt), node(36, node(49, mt, mt), node(64, mt, mt)))
```

Note that for btree-fold the lab wants you to convert the tree to a list, apply f to every element, and produce a value just like regular fold does, with a warning about complete testing, meaning every behavior of regular fold should be replicated here.
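Here's one possible sketch of btree-fold under that reading; folding in in-order sequence is my choice (any consistent order satisfies the lab as I read it), and the BTree definition is the lab's stencil:

```
data BTree<T>:
  | mt
  | node(value :: T, left :: BTree<T>, right :: BTree<T>)
end

fun btree-fold<A, B>(f :: (B, A -> B), base :: B, tree :: BTree<A>) -> B:
  doc: "Folds f over the elements of tree in in-order sequence"
  cases (BTree) tree:
    | mt => base
    | node(v, l, r) =>
      # fold the left subtree, feed that accumulator through v, then the right subtree
      btree-fold(f, f(btree-fold(f, base, l), v), r)
  end
where:
  t = node(2, node(1, mt, mt), node(3, mt, mt))
  btree-fold(lam(acc, v): acc + v end, 0, t) is 6
  btree-fold(lam(acc, v): link(v, acc) end, empty, t) is [list: 3, 2, 1]
end
```

Threading the accumulator through both subtrees avoids actually building the intermediate list, but testing it against flatten-then-fold is a good use of the is%() idea from the Oracles chapter.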

## Assignment 6: Filesystem

I interpret the assignment to mean this directory tree/model:

```
# this is missing from my template:
type Path = List<String>

# you can omit the size field, so c = file("name", "content") and
# c.size will return the size of the content field text.
filesystem =
  dir("TS",
    [list:
      dir("Text", empty,
        [list:
          file("part1", 99, "content"),
          file("part2", 52, "content"),
          file("part3", 17, "content")]),
      dir("Libs",
        [list:
          dir("Code", empty,
            [list: file("hang", 8, "content"), file("draw", 2, "content")]),
          dir("Docs", empty,
            [list: file("read!", 19, "content")])],
        empty)],
    [list: file("read!", 10, "content")])
```

Pyret allows you to write methods for a datatype. Here's an example for du-dir() where I wrote a .size() method:

```
provide *
provide-types *

data File:
  | file(name :: String, size :: Number, content :: String)
end

data Dir:
  | dir(name :: String, ds :: List<Dir>, fs :: List<File>) with:
    method size(self :: Dir) -> Number:
      doc: "Returns combined file size, file count, and sub directory count of a Dir"
      size(self)
    end
end

fun size(d :: Dir) -> Number:
  doc: "Method .size()"
  fun fsize(fd :: Dir) -> Number:
    doc: "Returns file size, and number of files in a Dir"
    fd.fs.foldl(lam(f :: File, acc :: Number): 1 + f.size + acc end, 0)
  end
  fsize(d) + map(lam(x :: Dir): 1 + size(x) end, d.ds).foldl(lam(elt, acc): elt + acc end, 0)
end

check:
  a = dir("TS", [list: dir("Text", empty, [list: file("part1", 1, "content")])], empty)
  b = dir("Libs",
    [list:
      dir("Code", empty, [list: file("hang", 1, "content"), file("draw", 1, "content")]),
      dir("Docs", empty, [list: file("read!", 1, "content")])],
    empty)
  c = dir("TS",
    [list:
      dir("Text", empty,
        [list: file("part1", 1, "content"), file("part2", 1, "content"), file("part3", 1, "content")]),
      dir("Libs",
        [list:
          dir("Code", empty, [list: file("hang", 1, "content"), file("draw", 2, "content")]),
          dir("Docs", empty, [list: file("read!", 1, "content")])],
        empty)],
    [list: file("read!", 1, "content")])
  d = dir("TS",
    [list:
      dir("Text", empty,
        [list: file("part1", 99, "content"), file("part2", 52, "content"), file("part3", 17, "content")]),
      dir("Libs",
        [list:
          dir("Code", empty, [list: file("hang", 8, "content"), file("draw", 2, "content")]),
          dir("Docs", empty, [list: file("read!", 19, "content")])],
        empty)],
    [list: file("read!", 10, "content")])
  e = dir("TS", empty, [list: file("", 1, "")])

  a.size() is 3
  b.size() is 8
  c.size() is 19
  d.size() is 218
  e.size() is 2
end
```

This is all in the Pyret documentation, and I looked at the Pyret source on github to see how to chain multiple methods:

```
data Dir:
  | dir(name :: String, ds :: List<Dir>, fs :: List<File>) with:
    method size(self :: Dir) -> Number:
      doc: "Returns the size of all sub directories"
      size(self)
    end,
    method path(self :: Dir, s :: String) -> List<Path>:
      doc: "Takes a Dir, and a search string, returns the path"
      path(self, s)
    end
end
```

For the path method there are numerous ways you could architect this better. Advice: I do these exercises on a phone whenever I have spare time (commuting, lunch break, randomly getting an idea somewhere), and I often repeated the same annoying mistake of using [list: ] instead of 'empty', which seems to break everything in Pyret test cases.

## Assignment 7: Updater

Reading the assignment for updater, the Cursor datatype can also contain another cursor which points to the node above:

```
data Cursor<A>:
  | mt-cursor
  | cursor(above :: Cursor<A>, index :: List<Number>, point :: Tree<A>)
end
```

The index represents the pointed-to cursor's position in the child list, i.e.: [list: 0]. This means you can fold over all the cursor.above indexes to build a chain of .get() or .set() calls:

```
import lists as L

data Tree<A>:
  | mt
  | node(value :: A, children :: List<Tree>)
end

data Cursor<A>:
  | mt-cursor
  | cursor(above :: Cursor<A>, index :: List<Number>, point :: Tree<A>)
end

# test tree:
#        1
#       / \
#      2   5
#     / \   \
#    4   3   6
#         \   \
#         3.5  7
test-tree = node(1,
  [list:
    node(2, [list: node(4, empty), node(3, [list: node(3.5, empty)])]),
    node(5, [list: node(6, [list: node(7, empty)])])])

# left side
test-tree.children.get(0)
# right side
test-tree.children.get(1)
# get node 3.5
test-tree.children.get(0).children.get(1).children.get(0)

# produces a list of indices from root to cur.point ex: [list: 0, 1, 0]
fun get-index(cur :: Cursor):
  L.foldl(lam(_, x): get-index(cur.above) + [list: x] end, empty, cur.index)
end

# produces tree.children.get(n).tree.children.get(n)..
fun get-nodes(tree, l):
  L.fold(lam(t, x): t.children.get(x) end, tree, l)
end

check "map_n produces [list: 0, 1]":
  get-nodes(test-tree, map_n(lam(n, _): n end, 0, test-tree.children))
    is node(3, [list: node(3.5, empty)])
end
```
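The get-nodes idea, folding a list of child indices into a chain of child lookups, translates directly to any language. A Python sketch (the `Node` class is a stand-in for the Pyret Tree, not part of the course code):

```python
from dataclasses import dataclass, field
from functools import reduce

@dataclass
class Node:
    value: float
    children: list = field(default_factory=list)

# same shape as the test-tree above
test_tree = Node(1, [
    Node(2, [Node(4), Node(3, [Node(3.5)])]),
    Node(5, [Node(6, [Node(7)])]),
])

def get_nodes(tree, indices):
    # fold the index list into a chain of .children[i] lookups,
    # just like get-nodes chains .children.get(n)
    return reduce(lambda t, i: t.children[i], indices, tree)

print(get_nodes(test_tree, [0, 1, 0]).value)  # 3.5
```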

Check the top node: if it matches, return cursor(mt-cursor, empty, entire-tree); otherwise send the children list of the root node into a helper function that keeps track of the parent node. When pred(f.value) is true, find the position of your cursor point in the parent's child list, extract this number as an index, and build the cursor with:

```
cursor(find-cursor(entire-tree, lam(x): x == parent-node.value end), index-of-point-node, point-node)
```

The index can also just be a number you test against 0, meaning an attempt to move left should raise an error as per the assignment. For the depth-first search requirements you'll have to figure out a scheme to return the leftmost or topmost node when there are multiple matches: you can get-index all the matching cursors if you want, or design find-cursor to only search as far as the requirements demand. There are multiple ways to implement this program; I came up with a few before trying to figure out the most efficient one.

get-node-val returns type Option (this is also in PAPL under Chapter 12/Sets): an Option wraps the response in some(v). Read the documentation for cases on Option and the some(a) => syntax. Including an underscore in cases means you don't plan on using that field.

```
# this returns some(v) which you extract with .value
fun get-node-val<A>(cur :: Cursor<A>) -> Option<A>:
  cases(Cursor) cur:
    | cursor(_, _, point) => some(point.value)
    | mt-cursor => none
  end
end
```

## Lecture 11 Streams

Watching the lecture titled Mon/10/01/18. In the examples of ones, every time you invoke 'ones' you get lzlink(1, <thunk>), which is one, plus a way to get more ones if called again. We're asked to write our own map; you could also try it with cases:

```
fun maps<A, B>(f :: (A -> B), s :: Lz<A>) -> Lz<B>:
  cases (Lz<A>) s:
    | lzlink(e, r) => lzlink(f(e), {(): maps(f, rst(s))})
  end
end

fun maps2<A, B, C>(f :: (A, B -> C), s1 :: Lz<A>, s2 :: Lz<B>) -> Lz<C>:
  cases (Lz<A>) s1:
    | lzlink(e, r) =>
      cases (Lz<B>) s2:
        | lzlink(ee, rr) => lzlink(f(e, ee), {(): maps2(f, rst(s1), rst(s2))})
      end
  end
end

>> s2 = maps(lam(x): x - 1 end, nats)
>> take(s2, 5)
[list: -1, 0, 1, 2, 3]
>> s3 = maps2(lam(x, y): x + y end, nats, nats)
>> take(s3, 5)
[list: 0, 2, 4, 6, 8]
```

The type-variable annotations he used (e.g. maps2<A, B, C>(f :: (A, B -> C), ..) -> Stream<C>) are covered in the streams chapter of the text and in Chapter 27.

### PAPL Chapter 16 Functions as Data

The chapter on streams explains some calculus notation and converts it into a Pyret function. The symbol \(\varepsilon\) is epsilon, a small positive quantity, and "functions of arity one" means functions that take one parameter of input. There's a for loop used in the check block, which is an iterator. We learn more about the (-> ...) notation, a function type that takes no arguments. For the Fibonacci example, change the starting values around so you can see what's going on: you first lz-link 0 and 1, then those values are mapped over, producing a new value, which gets mapped over again, etc.
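The chapter's derivative construction can be sketched outside Pyret too. A minimal Python version (not the book's actual code, just the same idea): a function that consumes a function of arity one and returns another function of arity one, using a small epsilon:

```python
def d_dx(f, epsilon=1e-8):
    # approximate derivative: (f(x + eps) - f(x)) / eps
    # the returned value is itself a function of arity one
    return lambda x: (f(x + epsilon) - f(x)) / epsilon

square = lambda x: x * x
print(d_dx(square)(3))  # close to 6, the true derivative of x^2 at 3
```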

### Lab: Streams

The streams lab is here, mostly adapted from the chapter we just read in PAPL. We already wrote map and map2 in the lecture/book, try to write your own filter/fold for streams:

```
fun lz-fold<A, B>(f :: (A, B -> A), base :: A, s :: Stream<B>) -> Stream<A>:
  lz-link(f(base, lz-first(s)),
    lam(): lz-fold(f, f(base, lz-first(s)), lz-rest(s)) end)
end

fun lz-filter<A>(f :: (A -> Boolean), s :: Stream<A>) -> Stream<A>:
  if f(lz-first(s)):
    lz-link(lz-first(s), lam(): lz-filter(f, lz-rest(s)) end)
  else:
    lz-filter(f, lz-rest(s))
  end
end

>> take(10, lz-filter(lam(x): x > 10 end, nats))
[list: 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
>> take(10, lz-fold(lam(x, y): x + y end, 0, nats))
[list: 0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```
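For comparison, Python's lazy iterators give the same behavior: `filter` over an infinite `count()` only forces as many elements as you slice off, and `accumulate` is a running fold, so both reproduce the outputs above:

```python
from itertools import count, islice, accumulate

# filter over an infinite stream; islice plays the role of take
filtered = list(islice(filter(lambda x: x > 10, count()), 10))
print(filtered)  # [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]

# running fold over an infinite stream, like lz-fold with +
folded = list(islice(accumulate(count()), 10))
print(folded)  # [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```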

## Assignment 8: Continued Fractions

You don't need the template for this assignment, only the following:

```
# CSCI0190 (Fall 2018)
provide *
provide-types *

data Stream<T>:
  | lz-link(first :: T, rest :: (-> Stream<T>))
end

# These are in the PAPL chapter on streams:
fun lz-first<T>(s :: Stream<T>) -> T:
  s.first
end

fun lz-rest<T>(s :: Stream<T>) -> Stream<T>:
  s.rest()
end

# Something for an mt-stream
rec mt-stream = lz-link(none, {(): mt-stream})

## Part 1: Streams
# cf-phi :: Stream<Number> =
# cf-e :: Stream<Number> =

fun take<A>(s :: Stream<A>, n :: Number) -> List<A>:
  ...
end

fun repeating-stream(numbers :: List<Number>) -> Stream<Number>:
  doc: ""
  ...
end

fun threshold(approximations :: Stream<Number>, thresh :: Number) -> Number:
  doc: ""
  ...
end

fun fraction-stream(coefficients :: Stream<Number>) -> Stream<Number>:
  doc: ""
  ...
end

## Part 2: Options and Terminating Streams
# cf-phi-opt :: Stream<Option<Number>> =
# cf-e-opt :: Stream<Option<Number>> =
# cf-pi-opt :: Stream<Option<Number>> =

fun terminating-stream(numbers :: List<Number>) -> Stream<Option<Number>>:
  doc: ""
  ...
end

fun repeating-stream-opt(numbers :: List<Number>) -> Stream<Option<Number>>:
  doc: ""
  ...
end

fun threshold-opt(approximations :: Stream<Option<Number>>, thresh :: Number) -> Number:
  doc: ""
  ...
end

fun fraction-stream-opt(coefficients :: Stream<Option<Number>>) -> Stream<Option<Number>>:
  doc: ""
  ...
end
```

You only have to use repeating-stream for phi and e, though I wrote a decimal-to-continued-fraction algorithm I got from here on page 13, and my own inefficient e generator anyway. When you write your repeating stream for e it has to be property-based tested: since it's a stream, you can only take x values at a time.

```
fun c-frac(n :: Number) -> Stream<Option<Number>>:
  doc: "Consumes decimal, returns some(fraction integral) stream"
  integral = num-truncate(n)
  decimal = n - integral
  ask:
    | decimal > 0 then:
      new-decimal = 1 / decimal
      lz-link(some(integral), {(): c-frac(new-decimal)})
    | otherwise:
      lz-link(some(integral), {(): mt-stream})
  end
where:
  cf-e :: Stream<Option<Number>> = c-frac(generate-e(30))
  cf-phi :: Stream<Option<Number>> = c-frac((1 + num-sqrt(5)) / 2)
  take(cf-e, 6) is [list: some(2), some(1), some(2), some(1), some(1), some(4)]
  take(cf-phi, 3) is take(repeating-stream([list: some(1)]), 3)
  take(c-frac(2.875), 4) is [list: some(2), some(1), some(7), none]
  take(c-frac(PI), 5) is [list: some(3), some(7), some(15), some(1), some(292)]
end

# see wikipedia: e is the sum of 1/k! where k starts at 0 and keeps going
# 1/0! + 1/1 + 1/(1*2) + 1/(1*2*3) ... where 1/0! is 1.
fun generate-e(n :: Number) -> Number:
  doc: "Generates e (inefficiently) to n decimal places"
  fun e-helper(current-count):
    factorial = take(nats-from(1), current-count)
      .foldr(lam(acc, x): x * acc end, 1)
    lz-link(1 / factorial, {(): e-helper(current-count + 1)})
  end
  take(e-helper(1), n).foldl(lam(acc, x): acc + x end, 1)
where:
  epsilon = 0.0000001
  num-abs(generate-e(10) - 2.7182818) < epsilon is true
  wikipedia-value = 2.71828182845904523536028747135266249775724709369995
  silly-epsilon = 0.00000000000000000000000000000000000000001
  num-abs(generate-e(60) - wikipedia-value) < silly-epsilon is true
end

check "property-based-test-e":
  # Property-based testing
  # Any frac stream of e, if integral > 1 it should be even:
  # ie [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1...]
  fun e-test(e-value, size-of-approx):
    take(c-frac(e-value), size-of-approx)
      .map(lam(x): if x.value > 1: num-modulo(x.value, 2) == 0 else: 0 end end)
      .filter(lam(y): not(y == 0) end)
      .foldl(lam(z, acc): z and acc end, true)
  end
  e-test(generate-e(10), 10) is true
  e-test(generate-e(20), 20) is true
  e-test(generate-e(30), 30) is true
  # This is false because we need a bigger e generated
  e-test(generate-e(20), 30) is false
end
```

The ask check decimal > 0 was to avoid division by zero, and to terminate finite rational streams with none. PI is a built-in Pyret constant. Everything you need for this assignment is in the book, like the nats-from() function I used.
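The series used by generate-e (e = 1/0! + 1/1! + 1/2! + ...) can be sanity-checked quickly in Python with exact fractions, which avoids the rounding issues a naive float sum would have:

```python
from fractions import Fraction
from math import e, factorial

def approx_e(terms):
    # e = sum over k >= 0 of 1/k!, computed exactly with Fractions
    return sum(Fraction(1, factorial(k)) for k in range(terms))

print(float(approx_e(20)))  # 2.718281828...
```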

The goal of this assignment is to produce a program that can make refined approximations:

```
# First attempt without Options
fun c-approx(c-list :: List<Number>) -> Number:
  doc: "Consumes list of continued fractions, returns rational approximation"
  cases (List) c-list.rest:
    | empty => c-list.first
    | link(f, r) => (1 / c-approx(c-list.rest)) + c-list.first
  end
where:
  # phi test of approximations
  cf-phi :: Stream<Number> = c-frac((1 + num-sqrt(5)) / 2)
  c-approx(take(cf-phi, 3)) is 3/2
  c-approx(take(cf-phi, 4)) is 5/3
  c-approx(take(cf-phi, 5)) is 8/5
  c-approx(take(cf-phi, 6)) is 13/8
  # PI test of approximations
  c-approx(take(c-frac(PI), 1)) is 3
  c-approx(take(c-frac(PI), 2)) is 22/7
  c-approx(take(c-frac(PI), 3)) is 333/106
  c-approx(take(c-frac(PI), 4)) is 355/113
end
```

The algorithm for this is just hand stepping the examples in the assignment writeup:

- c-approx([list: 3, 7, 15])
- (1 / c-approx([list: 7, 15])) + 3
- (1 / ((1 / c-approx([list: 15])) + 7)) + 3
- (1 / ((1 / 15) + 7)) + 3
- (1 / (106/15)) + 3
- (15/106) + 3
- 333/106
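The same hand stepping can be checked with exact rational arithmetic in Python; this is just an illustrative fold from the right, mirroring the c-approx recursion (first + 1/rest):

```python
from fractions import Fraction
from functools import reduce

def c_approx(coeffs):
    # fold from the right: each step is coefficient + 1/accumulated-rest,
    # exactly the recursion (1 / c-approx(rest)) + first
    return reduce(lambda acc, c: c + Fraction(1, acc),
                  reversed(coeffs[:-1]),
                  Fraction(coeffs[-1]))

print(c_approx([3, 7, 15]))     # 333/106
print(c_approx([3, 7, 15, 1]))  # 355/113
```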

There are many other stream functions left for you to write, like fraction-stream, which takes a stream as a parameter, so you can use take() inside the body of fraction-stream to keep grabbing more input:

```
# you can use a map over a stream here too
fun fraction-stream(n :: Stream<Number>) -> Stream<Number>:
  doc: "Consumes stream of continued fraction integrals, returns stream of rationals"
  fun helper(integrals, take-ctr):
    next-approx = take(integrals, take-ctr)
    lz-link(c-approx(next-approx), {(): helper(n, take-ctr + 1)})
  end
  helper(n, 1)
where:
  take(fraction-stream(repeating-stream([list: 1])), 3) is [list: 1, 2, 3/2]
end
```

The assignment has you redo everything with option types, doing cases on Option to safeguard against division by zero and the many other problems the writeup notes. The big-O complexity question asks what the meaningful parameters for the analysis are; this doesn't mean the parameters of the functions but parameters in the statistical sense, like the worst case, i.e.: no threshold is ever found for your approximations, since we are dealing with streams and not n-sized lists anymore.

### Myths about floating-point arithmetic

Check out myths about floating-point arithmetic from MIT's 2020 class on numerical analysis. A computer represents a floating-point number as a rational number multiplied by a power of two (binary), or a power of 10 in decimal floating point, which exists as libraries and hardware features. You can watch a lecture here on how fractional binary numbers work in C, and a visual representation of how the IEEE floating-point standard works (the same lecture exists on YouTube if you search for 15-213).
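A two-line Python illustration of the binary vs decimal distinction above: 0.1 has no exact binary representation, so binary floats drift, while a decimal floating-point library handles this particular case exactly:

```python
# binary floating point cannot represent 0.1 exactly, so sums drift
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# decimal floating point (a library feature, as noted above) is exact here
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```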

## Lecture 12 Model View Controllers

This is lecture Wed10/3/18. You can't see the entire whiteboard in the beginning but it doesn't matter; this is a lecture about separation of concerns when building large systems, and how to embody the essence of a system in a model that is common to all parts of the system.

## Lecture 13 Differential Systems

Lecture Fri10/05/18. It uses the same reactor model to make a point about differential systems. Interesting examples about PID controllers/reactive systems, event loops, and checkpointing the state of the system.

## Assignment 9: JoinLists

### Parallel programming talk

There is a talk for this assignment by Guy L. Steele from 2009, linked here under *Supplementary Resources*; you can watch it, or type its DOI into sci-hub to read the paper this assignment seems to have been derived from. The talk is way beyond our current abilities, filled with category theory jargon, but we can still follow most of it.

- Big message
- Effective parallelism uses trees
- User-defined associative combining operators are good
- MapReduce is good (we build MapReduce ourselves in a future assignment)
- 'Accumulator' paradigm is obsolete, use divide-and-conquer
- The last 50 years of programming tricks should be thrown out

Locality is mentioned @11:45 and explained here at 44:07: a program is likely to use data or instructions near memory locations it has recently used, like referencing array memory locations in succession (the stride-1 pattern).

In this Guy L. Steele lecture, *cons* is a Lisp linking operation similar to link(first, rest); *car* means first, *cdr* means rest. His split function takes a lambda/functional as a parameter, like map or filter do. Whenever you hear the term 'catamorphism' (category theory) in this talk, think of a fold function on lists; it's a more abstract term for something that takes many values and reduces them to a single value, like MapReduce IIRC. A full definition is in the book *Practical Foundations for Programming Languages* by Robert Harper, chapter 15, which we should read after CS19. There are some slides using the Fortress language; the syntax is similar to Pyret.

Example reductionFilter():

```
Fortress:
  reductionFilter[E](p: E -> Boolean, xs: List[E]): List[E] = ... body end

Pyret:
  fun reductionFilter<A>(f :: (A -> Boolean), xs :: List<A>) -> List<A>: ... body end
```

These are only examples in service of the talk's larger subjects; for instance @34:29, notice the unshuffle algorithm is similar to our JoinLists assignment datatype containing a number. He chopped up the data and mapped it into a richer dataspace, did all the computations there, then projected it back.

Associativity of these operators means that if you run (r, 2) concat (5), it's the same as (r) concat (2, 5). A 'monoid' is the term for an algebra satisfying the monoid laws: you have some type <A>, an associative operator like concat or + that combines two values of type <A> into a single <A>, and an identity value. In math, an identity for the natural numbers is n times 1, since multiplying anything by one gives you back that same number. In Pyret every list ends with a link to the empty list, and if you append [list: 1, 2] to empty, you get back [list: 1, 2]: another identity.
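The monoid laws are easy to spot-check on a small example; here they are for list concatenation in Python (a sketch of the laws, not a proof):

```python
# monoid laws for list concatenation
a, b, c = [1], [2, 3], [4]
identity = []

# associativity: grouping doesn't matter
assert (a + b) + c == a + (b + c)

# identity: combining with the empty list gives the value back,
# on either side
assert a + identity == a
assert identity + a == a

print("monoid laws hold for this example")
```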

He quickly goes through a huge amount of slides about string splitting algorithms, but the summary we are interested in is at 51:00 and at 53:49.

- sums, loops and list constructors are abstractly the same thing:
- generate a bunch of singletons
- map over them applying some function
- combine the results ie: generate and reduce

If you combine the results with associative operators then you can take advantage of parallelism. The rest of the talk brings up some interesting ideas: since function composition is associative, f ∘ (g ∘ h) = (f ∘ g) ∘ h, you can create thousands of functions that transform your data in steps, and with parallelism the order in which the combining is done doesn't matter, thanks to associativity. He talks about how arrays are unsuitable for most of this since they rely on addition hardware, whereas trees are much more efficient. Any kind of accumulator method like foldl or foldr is to be avoided if you are doing parallel algorithms.
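A quick Python check of that associativity claim (and a reminder that composition is associative but not commutative):

```python
def compose(f, g):
    # standard function composition: (f . g)(x) = f(g(x))
    return lambda x: f(g(x))

f = lambda x: x + 1
g = lambda x: x * 2
h = lambda x: x - 3

left = compose(compose(f, g), h)   # (f . g) . h
right = compose(f, compose(g, h))  # f . (g . h)

# associativity: both groupings give the same pipeline
assert all(left(x) == right(x) for x in range(-10, 10))

# but composition is NOT commutative: f(g(x)) != g(f(x)) in general
print(compose(f, g)(5), compose(g, f)(5))  # 11 12
```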

### Assignment

From the assignment writeup, use the template, where join-list(), JoinList.join() and split() are already provided as library code for us to use, along with some other testing functions. The writeup wants us to use split() for everything if possible. From Guy L. Steele's talk, we want to generate a bunch of singletons and throw them at functions, avoiding any kind of accumulator if we can.

Make some test JoinLists to play around with:

```
size-1 = one(1)
size-2 = join-list(one(1), one(2), 2)
size-3 = join-list(join-list(one(3), one(2), 2), one(1), 2)
size-4 = join-list(one("a"), join-list(join-list(one("b"), one("c"), 2), one("d"), 2), 2)

# x gives the first element
# y gives the rest of the JoinList
split(size-3, lam(x, y): y end)
>>> one(1)

size-1.join(one("another"))
>>> join-list(one(1), one("another"), 2)

# identity operator
empty-join-list.join(one("test"))
>>> one("test")
```

Here's some of what I tried as an example of how to write parallel style:

```
fun j-first<A>(jl :: JoinList<A>%(is-non-empty-jl)) -> A:
  doc: "Returns the first element of a non-empty list"
  cases(JoinList) jl:
    | one(elt) => elt
    | join-list(_, _, _) => split(jl, lam(x, _): j-first(x) end)
  end
where:
  split(size-2, lam(x, _): j-first(x) end) is 1
  split(size-3, lam(x, _): j-first(x) end) is 3
end

# Uses the identity operator of empty-join-list
fun j-rest<A>(jl :: JoinList<A>%(is-non-empty-jl)) -> JoinList<A>:
  doc: "Returns a join list containing all elements but the first of a non-empty list"
  cases(JoinList) jl:
    | one(elt) => empty-join-list
    | join-list(_, _, _) => split(jl, lam(x, y): j-rest(x).join(y) end)
  end
where:
  j-rest(size-2) is one(2)
  j-rest(size-3) is join-list(one(2), one(1), 2)
  j-rest(size-4) is join-list(join-list(one("b"), one("c"), 2), one("d"), 2)
end
```

For j-nth the logic was: return the first element automatically if n is 0, then fall into the join-list case. If the first element's length is less than or equal to the 0-based index n given as an argument to j-nth, what we're looking for isn't there, so skip the first element (which could be an entire embedded JoinList) and call j-nth again with n adjusted by subtracting x's length. This way there is no centralized accumulator/counter, and you could make 20 of these programs run concurrently to split up the work, as Guy L. Steele recommended. Try hand stepping this:

```
fun j-nth<A>(jl :: JoinList<A>%(is-non-empty-jl), n :: Number) -> A:
  doc: "Returns the nth element (0-based index) of a n+1 JoinList"
  cases(JoinList) jl:
    | one(elt) => elt
    | join-list(_, _, _) =>
      split(jl, lam(x, y):
          x-length = j-length(x)
          if (x-length <= n):
            j-nth(y, n - x-length)
          else:
            j-nth(x, n)
          end
        end)
  end
end

check:
  size-4 = join-list(one("a"), join-list(join-list(one("b"), one("c"), 2), one("d"), 2), 2)
  j-nth(size-4, 0) is "a"
  j-nth(size-4, 2) is "c"
end
```

The rest of the assignment is typical exercises we've done before, like implementing map/filter, but with reduce instead of fold this time. The writeup explains what reduce is: if you have [list: 1, 2, 3] then reduce with x - y would be 1 - 2 - 3.
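Python's functools.reduce is the same operation, and it also makes the upcoming associativity point concrete: '-' folds the operator between the elements, but grouping matters:

```python
from functools import reduce
from operator import sub

# reduce folds the operator between the elements: 1 - 2 - 3
print(reduce(sub, [1, 2, 3]))  # -4

# '-' is not associative, so grouping changes the answer:
print((1 - 2) - 3)  # -4
print(1 - (2 - 3))  #  2
```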

The analysis writeup asks what property the first argument to j-reduce should demonstrate, and Guy L. Steele already gave us the answer: be associative. If you're manipulating a bunch of singletons and the split function is non-deterministic, it can return the join-lists in a different order, which you will run into if you test j-reduce with the '-' operator.

## Assignment 10: Tweesearch

### Problem 1

This assignment isn't in the 2018 semester, but I decided to do all the available assignments from every semester. This one mimics IRL software development: you get a set of requirements, and they almost always change as new features are requested.

We have to:

- split the content of a tweet into a list of words
- feed it and a list of previous tweets into overlap from DocDiff
- return a sorted list of type List<Tweet> with the highest to lowest overlap

Template:

```
data Tweet:
  | tweet(author :: String, content :: String)
end

fun search(t :: Tweet, past :: List<Tweet>, threshold :: Number) -> List<Tweet>:
  ...
end
```

If you saved the DocDiff assignment, open it back up and click 'Publish'; you can get a link to import it as a library and use your own overlap function, though my import didn't work for some reason and I had to cut and paste it (and rewrite most of it from scratch as I forgot where I saved mine, oops).

Make some data:

```
test-tweet = tweet("", "This is a test")

past-tweets = [list:
  tweet("", "Random tweet"),
  tweet("", "Another one"),
  tweet("", "This is a test")]
```

For the search function we want to map over the list of past-tweets, send each one off to a function that converts its content to a word list, feed that into overlap together with the tweet we are searching for, and return the overlap number, but we also want to keep the original tweet. Why not make another datatype:

```
data Tweet-Stats:
  | tweet-stats(overlap :: Number, tweet :: Tweet)
end

# search tweet is also content "test"
a = tweet-stats(overlap([list: "test"], [list: "test"]), tweet("", "test"))
>> a
tweet-stats(1, tweet("", "test"))
>> a.overlap
1
>> a.tweet
tweet("", "test")
```

Built-ins like string-split-all(string, " ") can split on spaces to convert tweet content into a word list, and the results can be filtered by threshold and sorted with .sort-by(), or with the insertion sort we already wrote in the first few lectures.
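The whole pipeline, sketched in Python. The overlap here is a stand-in similarity (shared words over the larger word set), NOT DocDiff's actual formula; it's only an assumption so the sketch is self-contained:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    content: str

def words(t):
    return t.content.lower().split()

def overlap(a, b):
    # stand-in similarity: shared words / larger word set
    # (placeholder, not the DocDiff overlap formula)
    sa, sb = set(a), set(b)
    return len(sa & sb) / max(len(sa), len(sb))

def search(t, past, threshold):
    # score every past tweet, keep the original tweet alongside its score
    scored = [(overlap(words(t), words(p)), p) for p in past]
    kept = [sp for sp in scored if sp[0] >= threshold]
    # highest overlap first
    return [p for _, p in sorted(kept, key=lambda sp: -sp[0])]

past = [Tweet("", "Random tweet"), Tweet("", "Another one"), Tweet("", "This is a test")]
result = search(Tweet("", "This is a test"), past, 0.1)
print(result[0].content)  # "This is a test" scores highest
```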

### Problem 2

We have to watch lecture 14 (DAGs, identical) and some of 15, and/or read the book chapters on DAGs.

## Lecture 14 Identical

Lecture Wed10/10/18. A great lecture about memory aliasing; it introduces directed acyclic graphs, acyclic meaning it doesn't cycle, with edges in one direction. The reading uses memory-location examples to make this concept more clear.

### Git DAG

This explanation of git object storage using a DAG will help us understand Tweesearch assignment problem 2. This data structure ascends upward: git starts with a blob, which is often a file. The file you are reading right now, software.html, in the path of this link, is a blob, and you can click on 'raw' to get just the blob contents.

The representation ascends up to the git directory branches. Everything depends on everything else, so we can't have an empty node; if a node is empty it gets garbage collected by git. When you create a new commit, your DAG grows upward again.

### Tweesearch Problem 2

A new tweet comes in, and to calculate the new relevance we again go through the list of old tweets, with the formula (0.25 × overlap(parent-tweet, new-tweet)) + (0.75 × overlap(old-tweet, new-tweet)). Our tweet datatype is now:

```
data Tweet:
  | tweet(author :: String, content :: String, parent :: Option<Tweet>)
end

# some test data
parent = tweet("", "test content", none)
child = tweet("", "child content", some(parent))
```
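The weighted formula above as plain arithmetic; the overlap values here are made up purely for illustration:

```python
def relevance(parent_overlap, old_overlap):
    # 0.25 weight on overlap with the parent tweet,
    # 0.75 on overlap with the old tweet itself
    return (0.25 * parent_overlap) + (0.75 * old_overlap)

# made-up inputs: parent overlaps 0.8, the old tweet itself 0.4
print(relevance(0.8, 0.4))  # about 0.5
```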

TODO

## Assignment 11: MapReduce

TODO

## Lecture 15 DAG size, Monads

Lecture Fri10/12/18. They had some homework/quiz writing examples to check the size of a DAG that we don't have access to; however, you can read chapter 20 *Sharing and Equality* from PAPL and follow along with the examples. If a branch is the same construction (recall identical from previous lectures/the book) then you don't count its size twice.

```
check "size of A node":
  A = node(1, leaf, leaf)
  size(node(2, A, A)) is 2
  size(node(1, A, A)) is 2
end
```

There are only 2 distinct nodes in each: node A and another node wrapped around it. Check blocks can be given strings, so when you run a test you know which one failed. The rest of this lecture is writing the DAG size function with the class, because you have to carry along information about which nodes you've already seen. We discover this design pattern of carrying information is called monadic programming.

At the end of the lecture he walks through the code of a Pyret library he wrote, explaining the types and syntax; you can use this to look up the source yourself when doing assignments and read the code.

## Lecture 16 Graphs

Lecture Mon10/15/18, a casual lecture introducing graphs and describing their properties. Our assignment is to read the chapter on graphs up to measuring complexity.