With such a broad scope, ranging from machine learning to cloud computing, it is easy for computer science to seem like magic. Misconceptions abound in society regarding the field, its purpose, and its methods. However, with software playing a role in almost every industry on Earth, it is vital that computing be demystified. For those planning on majoring in the field, this is of utmost importance: you need to know what you are getting into before making the next great technological leap.

Computer science, contrary to popular sentiment, is not solely about programming. Programming is the tool computer scientists wield to explore and implement their conjectures and theories. For example, if you were studying to become a ship captain, you would not spend the entire time learning about the ship. Instead, you would expect to learn about the tides, steering, and all manner of oceanic peril. The ship is simply the vehicle, and just as there are many different types of ships, there are many different types of programming languages.

Computer scientists use the vessel of programming to navigate the ocean of computing potential. As such, studying computer science is less about the act of programming and more about the underlying theory, mechanisms, and algorithms used in computing. Programming is certainly vital, but in a utilitarian manner: it is the main tool, not the main objective.

Delving into the theory behind computer science can be intimidating, but it is ultimately vital to developing a thorough understanding of the subject. So with that goal in mind, let’s take a look at two mainstays of the discipline: abstraction and algorithms.

Abstraction

Abstraction involves taking a concept and using only its key features to solve a problem. It allows for greater creative freedom, letting computer scientists disregard unnecessary minutiae in their programs. The ultimate goal of abstraction in computer science is to free humans from toiling over small details and allow for a more big-picture view. Without having to worry about the computer’s internal processes, we can more easily implement high-level ideas.

What does abstraction look like?

Even extremely basic actions in programming involve some level of abstraction. Take, for example, this simple calculation in Java:

int a = 10;
int b = 20;
int c = a + b;
System.out.println(c);

Here, abstraction can be seen on a few levels. First, declaring the variables as integers (the “int” before each variable name) is a form of abstraction. We tell the computer that each variable is a member of Java’s integer data type, with all the characteristics that entails. Second, the act of assigning values to the names “a,” “b,” and “c” is itself an example of abstraction. We don’t have to remember later on what the value of “a” is; we just remember that we called it “a.” The same holds true for “b.” Later, when we want to find the sum, we simply add the variable names and display the result; the computer takes care of remembering the stored values.

Third, the statement “System.out.println(c)” exhibits another layer of abstraction. “System” is a class in Java; “out” is the standard output stream it provides, and “println” is a method that prints whatever we pass to it, in this case our variable “c.” We don’t need to know everything about the System class, just how to access it for our purpose. Different languages can also use abstraction in different ways; let’s take a look at this same calculation made in Python.

a = 10
b = 20
c = a + b
print(c)

Here, Python adds another layer of abstraction by not requiring data type declarations. We simply assign the value 10 or 20 to our variables without needing the “int” declaration required in Java. In addition, we don’t need to know how to access any “System” class; we just use the print() function to print our value.

From this basic premise, abstraction becomes a powerful tool for driving cutting-edge technological development. Fields such as artificial intelligence, for instance, draw heavily on this core idea. High-level programming languages used for quantum computers also take advantage of abstraction.

Algorithms

You’ve probably heard of algorithms in your math classes. They’re essentially step-by-step methods for solving a given problem. In computer science, algorithms take on a new level of importance and are ubiquitous in almost every computing discipline. Because algorithms are so varied in form and function, we’ll only touch on two categories here: sorting and searching.

Sorting Algorithms

Sorting algorithms do just that: sort. They take in a list of elements as input and arrange them in some order; with a numerical list, the most common order is least to greatest. A wide variety of such algorithms exists, with varying levels of performance. The most basic method, used primarily in teaching computer science, is the bubble sort. It involves stepping through a list or array, comparing consecutive elements. If the earlier value is greater than the one after it, the two values are swapped; if they are already in order, no swap occurs. These passes are repeated until a full pass produces no swaps, at which point the list is sorted.

[Figure: a simple example of a bubble sort algorithm]
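
To make this concrete, here is a minimal sketch of a bubble sort in Java, the language used earlier; the class name, method name, and sample values are purely illustrative.

import java.util.Arrays;

public class BubbleSortExample {
    // Repeatedly step through the array, swapping adjacent values that are
    // out of order, until a full pass makes no swaps.
    static void bubbleSort(int[] values) {
        boolean swapped = true;
        while (swapped) {
            swapped = false;
            for (int i = 0; i < values.length - 1; i++) {
                if (values[i] > values[i + 1]) {
                    int temp = values[i];        // swap the neighboring pair
                    values[i] = values[i + 1];
                    values[i + 1] = temp;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] numbers = {5, 1, 4, 2, 3};
        bubbleSort(numbers);
        System.out.println(Arrays.toString(numbers)); // prints [1, 2, 3, 4, 5]
    }
}

Notice how each pass pushes the largest remaining value toward the end of the array, which is exactly the behavior discussed below.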

A bubble sort is among the simplest sorting algorithms and is rarely used outside an educational setting. This is largely because sorting a large list whose elements are in random positions becomes extremely slow with the bubble sort. As shown above, it is relatively easy for greater values to move toward the top; it is much harder for smaller values to move downward. A greater value can move several places up in one pass, while a smaller value can only move one place down the list per pass. If the value “1” were in the last position, many more passes would have to occur before the ordered list was created. Other, more sophisticated sorting algorithms attempt to remedy this issue; insertion sort, merge sort, and quicksort are just a few examples.
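
Of the alternatives just mentioned, merge sort illustrates one way around the problem: rather than moving values one step at a time, it splits the list in half, sorts each half, and merges the sorted halves back together. The following is a rough sketch in Java; again, the names and sample values are illustrative only.

import java.util.Arrays;

public class MergeSortExample {
    // Sort by splitting the array in half, sorting each half recursively,
    // and merging the two sorted halves back together.
    static int[] mergeSort(int[] values) {
        if (values.length <= 1) {
            return values; // a list of zero or one elements is already sorted
        }
        int mid = values.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(values, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(values, mid, values.length));

        int[] merged = new int[values.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            merged[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) merged[k++] = left[i++];   // copy any leftovers
        while (j < right.length) merged[k++] = right[j++];
        return merged;
    }

    public static void main(String[] args) {
        int[] numbers = {2, 3, 4, 5, 1};
        System.out.println(Arrays.toString(mergeSort(numbers))); // prints [1, 2, 3, 4, 5]
    }
}

Because each merge places the smallest remaining value directly into position, a small value stuck at the end of the list no longer has to creep downward one step per pass.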

Search Algorithms

Search algorithms are just as varied and incredibly broad in their applications. They essentially find and retrieve a desired piece of information. In an ordered numerical list, this could take the form of finding the maximum, minimum, or median. Given this task, there are two prominent methods for finding a desired value: linear search and binary search.

A linear search involves simply checking every single value until the desired element is found. Much like the bubble sort among sorting algorithms, this becomes impractical for large data sets due to its inefficiency. A binary search is much more desirable. It involves picking a data point roughly in the middle of the data set and checking whether the desired element is higher or lower than the point selected. If higher, the computer can ignore the values lower than the chosen point, and vice versa. When the computer finds the desired value, the search stops. In this manner, a binary search can quickly eliminate unnecessary data points to arrive at the correct value. Used this way, a binary search requires an ordered list, which is where a sorting algorithm can come in handy.
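
As a rough sketch of both approaches in Java, assuming the list has already been sorted, the following compares a linear search with a binary search; the class and method names are illustrative.

public class SearchExamples {
    // Linear search: check each element in turn until the target is found.
    static int linearSearch(int[] sorted, int target) {
        for (int i = 0; i < sorted.length; i++) {
            if (sorted[i] == target) {
                return i;  // index of the target
            }
        }
        return -1;         // target is not in the array
    }

    // Binary search: repeatedly check the middle of the remaining range
    // and discard the half that cannot contain the target.
    static int binarySearch(int[] sorted, int target) {
        int low = 0;
        int high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;  // middle of the remaining range
            if (sorted[mid] == target) {
                return mid;
            } else if (sorted[mid] < target) {
                low = mid + 1;   // target can only be in the upper half
            } else {
                high = mid - 1;  // target can only be in the lower half
            }
        }
        return -1;               // target is not in the array
    }

    public static void main(String[] args) {
        int[] sorted = {1, 2, 3, 5, 8, 13, 21};
        System.out.println(linearSearch(sorted, 8));  // prints 4
        System.out.println(binarySearch(sorted, 8));  // prints 4
    }
}

On a seven-element array the difference is negligible, but on millions of elements the halving done by a binary search pays off very quickly.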

Fundamentals Are Key

The world of computer science is as vast as it is deep, and starting to learn more can certainly be tricky. However, grasping some of the basic theory behind the discipline leaves us better equipped to understand the monumental change computer science continues to bring. As it continues to reshape our daily lives, gaining at least a basic knowledge of computing becomes increasingly important; demystifying computer science is vital in shaping a better, more informed future.