Snapshot: This week I completed Computer Science 101: Master the Theory Behind Programming. Here are my notes! As a disclaimer, I don't claim to be an expert who can teach you these things, which is why I make sure to link plenty of articles from people who are experts. But using the Feynman Technique, I'll fancy myself a teacher trying to explain these topics to a student. Your patience is appreciated.

Now, in the grand scheme of things, this is furiously brief. Check out the course in the link if you're so inclined, or pick up a book (Computational Fairy Tales, The CS Detective, Computer Science Made Easy, just to name a few), or find some free YouTube vids to go further in depth!

Here's our Table O' Contents:

I. The Binary System: How Do It Know?
II. Time Complexity: What We Talk About When We Talk About Runtime
III. Or What's An Algorithm For?
IV. Math: A Way of Thinking About...Everything
V. Who's Afraid of the Big O Notation?
VI. The ABCs of How To Begin Thinking Like A Programmer
VII. Congratulations, You Made It To The Summary!

But for now, let's hit the ground running.

I. The Binary System: How Do It Know?

This is how computers think. This is just a short blurb about the binary number system. It's not essential to know this inside and out, but for the purposes of understanding the logic of computers, it's pretty helpful.

"At the fundamental level, the computer only knows zero and one."

Let's break that down. So, our computer runs on...what? Electricity. Go over to your nearest light switch. Or just look at it, that's fine. We switch it on. To a computer, that's the equivalent of a "1." We switch it off. To a computer, that's the equivalent of a "0." So, way down deep, underneath all those fancy user interface windows, all that complicated-looking code, all the complicated code underneath that code, way, way down deep, the computer is really just looking for a series of on and off switches. It's just looking at what's a zero and what's a one.
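The on/off idea can be sketched in a few lines of Python. (The switch list and variable names here are my own illustration, not something from the course.)

```python
# Four light switches, read left to right: on, off, on, on.
switches = [1, 0, 1, 1]

# Each new switch doubles the running total before adding itself in,
# which is exactly how binary place value works.
value = 0
for bit in switches:
    value = value * 2 + bit

print(value)           # 11, because 1011 in binary is 8 + 0 + 2 + 1
print(int("1011", 2))  # Python's built-in conversion agrees: 11
```

Every piece of data a computer holds, whether numbers, text, or images, ultimately boils down to longer rows of these switches.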
And that's basically all you need to know here. When we're programming, we're just using human-made languages that work with these on-off switches (also known as transistors). Or you can think of it as passing the torch to a team of interpreters. Either way, we don't need to concern ourselves with the nitty-gritty, but it's good to know a little of what's going on under the hood.

So now it's time to switch back to a human mode of thinking. Even though it might not sound very human on first read. In fact, it's going to sound wonderfully science-fictiony. We're going to talk about time complexity.

II. Time Complexity: What We Talk About When We Talk About Runtime

What is time complexity? Well, first of all, what is runtime? We could break a software program down into two stages, just to simplify things: it compiles, and it runs. First, our program compiles, loading all its systems into place. You'll especially see this stage on complex computer programs, such as highly modular computer games, where you not only implement the "vanilla" systems but can also add fan-created modifications to the queue (Skyrim and The Sims come to mind). In the second stage, our program runs. Our program's runtime is simply what happens between when a program opens and when it closes.

But we've all encountered long load screens and lag before. The underlying code of a program needs to be organized so that it isn't tasking the computer with too many things at once and taking longer than it needs to. There's a way computer scientists can talk to each other and analyze each other's programs: how they can figure out how long and complex a program really is, and how to make it more efficient. It's called time complexity. And time complexity is actually about algorithms. An algorithm is a set of rules a computer follows in order to make calculations and solve problems.
When a program is running, it is essentially running through multiple algorithms that control its logic. Time complexity "is a standard way of analyzing and comparing different algorithms. It allows computer scientists to figure out how to improve what's already out there" (Kurt Anderson's words, not mine). So, when we talk about runtime, we talk about time complexity, and when we talk about time complexity, we're using the standard of computer scientists around the globe: a way of analyzing algorithms known as Big O Notation.

But first, let's open up algorithms a little bit more. After all, they're the bread and butter of computer science (and thus the bread and butter of game development and all other related fields).

III. Or What's An Algorithm For?

To reiterate, an algorithm is a set of rules a computer follows in order to make calculations and solve problems. It's like a recipe, giving a computer a series of steps to follow to complete a computational task. Wikipedia makes it even more succinct: logic + control = algorithm. And guess what? We encounter algorithms all day long. You know plenty of them already. Here are some examples you picked up in school: long division, factoring, and the quadratic formula are all step-by-step recipes.
Here are some examples from everyday life: following a cooking recipe, tying your shoes, or running through your morning routine in the same order every day.
Here are some examples of technology using algorithms in the background: a search engine ranking results, a GPS app finding the shortest route, a social feed deciding what to show you next.
Here are some examples from video games: pathfinding that steers enemies around walls, random loot drops, and procedural level generation.
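To connect the recipe idea to actual code, here's a minimal sketch of one such everyday algorithm, finding the biggest number in a list, written out step by step. (The function and its name are my own illustration, not from the course.)

```python
def largest(numbers):
    """Recipe: keep a 'best so far' and compare it against each item in turn."""
    best = numbers[0]          # step 1: start with the first item
    for n in numbers[1:]:      # step 2: walk through the rest
        if n > best:           # step 3: compare each item to the best so far
            best = n           # step 4: keep the winner
    return best                # step 5: the answer

print(largest([3, 41, 7, 19]))  # 41
```

Notice the two ingredients from the Wikipedia formula: the comparisons are the logic, and the loop that walks the list in order is the control.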
And here's the one we're actually going to focus on. We've mentioned it before: Big O Notation. But before we talk about Big O Notation, let's talk about its predecessor, n-Notation. Using caterpillars.

n-Notation looks like this: n(n), and reads like this: n is a function of n. And if you translated that further, it means: output is a function of input.

Here's an illustration of that, using something less abstract than "n." When a caterpillar enters a cocoon state, it follows the logic of a caterpillar (in this case, its "cocoon function"), and the logical output is a moth. However, this moth isn't a completely new, unrelated factor. The moth is a natural part and extension of the caterpillar. And it's controlled. Remember, algorithms = logic + control. Even though caterpillars munch on leaves all day long, no amount of logic will turn one into a leaf. In the end, the output must be a function of the caterpillar itself. You could go even further and say this particular caterpillar couldn't even become a butterfly if it wanted to, because this is a specific caterpillar that will turn into a specific moth. Another caterpillar might become a Monarch butterfly, but that would be a different function. Different caterpillar, different function. Even though we're dealing with abstract things like "n" or big numbers or what-have-you, this is the basic logic. So, caterpillar is a function of caterpillar. So, n is a function of n.

Here's Kurt Anderson's bold note: "n-Notation is NOT an analysis of how long an algorithm will take to run, but an analysis of how the algorithm will scale with more and more input data."

And guess what? We're still not moving on to Big O Notation. Nope, in the spirit of Mr. Miyagi, we're going to build up the suspense, but it's all much-needed stuff, don't worry. Continue not to worry, even as I unceremoniously drop our next topic: logarithmic functions.

IV. Math: A Way of Thinking About...Everything

Yes, computer science is built on math.
More accurately, it takes mathematical concepts to explain the reasoning behind computers and their programs. I'll say this: learning this kind of math has been much more fun and a lot less airy than high school math. We're not talking about train timetables or abstract textbook questions; this has real-world applications, and, even if it takes a couple of passes, I found it...amusing. In fact, when you decide to study math on its own, for its own sake, for your sake (and not your academic system's requirements), it's a whole new world. No less difficult, but now it's a welcome challenge rather than a chore. That's been my experience.

Math and computer science are really about thinking. Computational reasoning wasn't invented by computers; it was invented by other human beings. Computational reasoning is logical reasoning. And we often don't give math credit for its sexier attributes: it's explained hypothetical futures, ended wars, illustrated the shape of space, helped construct magnificent cathedrals, enabled rockets, and organized the entire sum of human knowledge.

And today we only have to cover logarithmic functions. I'll link more in-depth explanations here and here (this last link explains Big O Notation as well), but I'll quickly try to explain what a logarithmic function is and why it's important by illustrating its opposite: exponential functions.

So, if you've graduated high school, you have at least a cursory knowledge of what an exponential is. 4^n can be read as 4 to the power of n. Which means if n = 2, then 4^2 means 4*4, and therefore 4^2 = 16. And while it's not the same equation, just to get a picture, a graph of x = n^3 looks like this:

That's an exponential function. We're increasing exponentially. As stated before, an exponential function is the opposite of a logarithmic function. Let's do the logarithmic version of our exponential example, 4^2 = 16. It would look like this: log₄16 = ? (read: log base 4 of 16 equals what?)
We can convert this to exponential form: 4^? = 16. 4 to the power of what is equal to 16? Well, 4 to the power of 2! So another way to refer to logarithmic functions is to call them inverse functions. They are the inverse of exponential functions. Just look at the graphed comparison between the two types: they are inverse. They are opposites.

V. Who's Afraid of the Big O Notation?

Now, the reason n-Notation and logarithmic functions matter is that they lay the groundwork for our first important tool: Big O Notation.

Q. What does it do?

I like Jeremy Kubica's definition from his book, Computational Fairy Tales: "Big-O notation is a method of specifying the worst-case performance of an algorithm as the size of the problem grows. Big-O notation is important for understanding how algorithms scale and for comparing the relative performance of two algorithms."

Q. Why is it called "Big O Notation," and how is it related to n-Notation?

Look at the chart below: Big O Notation refers to the second notation listed there, the bigger O. (Yes, we can ignore the Greek letters for now.) The chart also shows the relationship Big O has to n-Notation: the n is included right there in the parentheses. Big O is a function of n. In this case, it's a function that tells us how many operations an algorithm will need in order to complete the problem at hand.

Q. What does it look like?

See below. The above is a screen capture from this website, which is worth studying closely. Scrolling down the page, you're going to see all sorts of terms, like array and heapsort and insertion, that we're going to see over and over again in our study of computer science and game development. But we don't need to eat the whole elephant at once. The last thing we need to think about in regards to Big O Notation is perhaps the most important.

Q. Why should I care?

We've already established that computer scientists need a way of analyzing an algorithm's efficiency.
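To make "analyzing efficiency" concrete, here's a sketch that counts the steps two different search strategies take on the same input. (The step-counting helpers are my own, not from the course, but linear and binary search themselves are standard textbook algorithms.)

```python
def linear_search_steps(items, target):
    """Check each element in turn; worst case looks at all n of them: O(n)."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

def binary_search_steps(items, target):
    """Halve a sorted list each pass; worst case ~log2(n) looks: O(log n)."""
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(100))                # 100 sorted elements
print(linear_search_steps(data, 99))   # 100 steps: it scales as O(n)
print(binary_search_steps(data, 99))   # 7 steps: it scales as O(log n)
```

Same question, same data, wildly different step counts, and the gap only widens as n grows. That difference in scaling is exactly what the notation captures.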
If we throw, say, 100 elements at it, is it going to take 100 steps for that algorithm to run? Well, if n = n, then yes. But the larger n gets, the less desirable that particular algorithm becomes. We've already established that Big O Notation is a way of measuring in terms of "n." n, in this case, is just our stand-in variable for however many steps a particular algorithm will go through: n steps for n elements. Written in Big O terms, that's O(n).

There are two ways to look at Big O notation: from a mathematical POV and from a computer science POV. We've already tapped the surface of the mathematical background (with sizeable gaps, I know, but check out this video here for a better take on the subject), but all we really need to know is why computer scientists care about Big O Notation: we use it to judge the worst possible outcome so we can devise the best possible solution. If it's good enough for the pros, it's good enough for us.

VI. The ABCs of How To Begin Thinking Like A Programmer

Andy Harris gave a great speech a few years ago about how to think like a programmer. Or, in his case, how to teach a beginner about programming. The speech is an hour long, so here's the breakdown:

A. Coding is not about the programming language.
B. You can learn one or two dozen languages, but the same basic concepts apply across the board.
C. It's that way of thinking about and combining these concepts to solve problems that constitutes coding.
D. According to Harris, there are only about eight concepts in all of programming:
E. Declaring a variable with an appropriate name, type, and value.
F. This is actually an algorithm, too. Remember, an algorithm is a recipe. Here's what the above is saying to the computer: "Create a variable called Name, of type Type, that starts with the value InitialValue."
G. Output. Telling the user stuff. print("Hello World") and the computer will print "Hello World" to the console.
H. Input. Asking the user for input before you give them output.
I. Convert to integer.
Take something that's not an integer and make it into one (an integer being any whole number: positive, negative, or zero). Perhaps taking a float (3.14) and converting it.
J. For loop. (This and the while loop are best saved for an in-depth discussion at a later date.)
K. While loop. (Ditto.)
L. Debugging. A.K.A. "Why is this broken?"
M. So the #1 skill we are after is so obvious it hurts: problem solving.
N. In Harris' own words, "The secret isn't code, it's algorithms and data."
O. Further, "If you're lost in coding, it probably means you shouldn't be coding yet." (I.e., learn how to think first.)
P. Comments aren't there to explain the code to other programmers; code is there to explain the comments.
Q. We're all beginners.
R. Best practice: write an algorithm in plain English before you start coding.
S. Failure is WONDERFUL.
T. Begin debugging now.
U. Of course, the best way to debug is to not have bugs.
V. "Bad implementation can be Googled; bad algorithms cannot."
W. "Don't start with the solution; start by truly understanding the problem."
X. "Nothing is messier than game code...you don't write it to be maintained, you write it to be fast."
Y. It will all happen in good time.
Z. And finally, his recommendation for what to learn after you're comfortable thinking computationally:

VII. Congratulations, You Made It To The Summary!
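As a parting sketch, most of Harris' core concepts fit in a dozen lines of Python. (The toy example is mine, not his.)

```python
age_text = "30"           # declaring a variable: name, type, value
age = int(age_text)       # convert to integer
print("Age:", age)        # output: telling the user stuff
# Input would look like: age_text = input("How old are you? ")

total = 0
for year in range(1, 4):  # for loop: repeat a known number of times
    total += year         # total becomes 1 + 2 + 3 = 6

countdown = 3
while countdown > 0:      # while loop: repeat until a condition fails
    countdown -= 1

print(total, countdown)   # 6 0
```

Variables, output, input, conversion, two kinds of loops, and (once something inevitably breaks) debugging: that's most of the list right there.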
Happy New Year Everyone!
In Excelsis Deo.
Author: K.W. writes novels, short stories, the occasional ode, game scripts, and (with actual evidence!) this here blog.
Archives: April 2020