(Originally recorded 2019-04-09)
Recap of Lecture 2.
Basics of functions, parameter passing, modularity in C++.
cf. C++ Core Guidelines: Pass small-sized parameters by value, large parameters by const reference.
Separate compilation: Separate object (.o) files, linked together into executable.
Function declaration made in header file. Function definition in source file.
Make and makefiles.
Like all programming languages, C++ allows you to create functions.
A function in C++ has essentially four components: The function name, the parameters passed to the function, the function return type, and the function body. Together, the function name and parameter types constitute the “signature” of the function. Functions must have a unique signature, not simply a unique name, meaning you can define functions with the same name as long as the types of the parameters are different—a technique known as “function overloading.”
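For example, the following sketch (with hypothetical function names) defines two functions with the same name but different signatures:

int twice(int x) { return 2 * x; }          // signature: twice(int)
double twice(double x) { return 2.0 * x; }  // signature: twice(double)

At a call site, the compiler selects the overload whose parameter types match the arguments.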
Unlike almost all other languages, C++ lets you specify how parameters are passed; most languages support just a single mode. C++ supports pass by value as well as pass by reference.
It is easiest to see the different modes of parameter passing with an example. Consider a function that squares its argument:
double square(double y) {
y = y * y;
return y;
}
Note that in the body of the function, we square y and then return the squared value.
Suppose we call this function
double pi = 3.14;
double pi_squared = square(pi);
std::cout << pi << std::endl;
What will be printed for the value of pi in this case? The type of the argument to square is a double, meaning that when we call square, the value of pi is copied to y. So, even though we change the value of y in the body of the function, y holds a copy of the value of pi; it isn't pi itself. Hence, 3.14 is what will be printed; the value of pi will not be changed.
Now, suppose we define square in a very slightly different way.
double square(double& y) {
y = y * y;
return y;
}
Can you see the difference? It is only one character. Rather than passing a double to square, we are passing a double& – a reference to a double. Now if we repeat the above:
double pi = 3.14;
double pi_squared = square(pi);
std::cout << pi << std::endl;
What will be printed for pi? In this case it will be 9.8596 – the square of 3.14. Calling this version of square changes the value of pi! What has happened is that rather than copying the value of pi to y when we invoke square, the variable y is a reference to pi – in some sense it is pi – whatever happens to y also happens to pi.
Passing references seems kind of dangerous. Why would we want to be able to do that? For two reasons. One, we might want the behavior of our function to be such that it mutates its arguments – these are sometimes called "out" or "in-out" arguments. (The C++ Core Guidelines generally recommend returning values rather than using "out" parameters, however.) The second reason is more important. When we call a function and pass an argument by value, the value is copied. That is fine when we are passing something small, like a double. But soon in this course we will be working with large data structures, vectors and matrices that may be megabytes or even gigabytes in size. Copying such a data structure incurs a large overhead just for making the function call. It is much more efficient to pass large variables by reference and operate directly on the original data.
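To see why this matters, here is a minimal sketch (the function names are hypothetical) contrasting the two modes for a large argument:

#include <vector>

// Pass by value: the entire vector is copied on every call.
double sum_by_value(std::vector<double> v) {
  double s = 0.0;
  for (double x : v) s += x;
  return s;
}

// Pass by const reference: no copy is made, and the function promises not to modify v.
double sum_by_cref(const std::vector<double>& v) {
  double s = 0.0;
  for (double x : v) s += x;
  return s;
}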
There is one more important variation on how parameters are passed.
Suppose we have square defined as above to take its argument by reference. Suppose we now call it like this:
double square(double &); // declare the prototype for square
double pi_squared = square(3.14); // error!
std::cout << pi << std::endl;
This example will fail to compile. (The compiler will produce an error message along the lines that the candidate function is not viable because it is expecting an l-value.) The problem is that square takes its argument by reference, meaning that the function is free to change the value inside its body. The function may or may not actually make a change, but what the compiler looks at is the function signature – or its prototype. If you declare that you are passing a parameter by reference, the compiler has to assume you might change the value. The problem with calling square(3.14) is that 3.14 is a constant. We might try to change its value, but that makes no sense for something that is not a variable.
On the other hand, calling square(3.14) is something that seems perfectly sensible to be able to do. To allow constants to be passed by reference, C++ allows you to mark a reference as const, meaning you promise that your function will not try to change the value of that argument. We would use const this way:
double square(const double& y) {
double z = y * y;
return z;
}
Note that we can no longer have the statement y = y * y – if we mark a variable as const, the compiler will not allow us to change it. But with this definition, we can invoke square on 3.14:
double square(const double &); // The prototype of square now has const reference
double pi_squared = square(3.14); // So we can call it on a constant
std::cout << pi << std::endl;
Using a const reference lets us also pass in something called an “r-value” to a function. An “r-value” is a temporary created by the compiler in response to a compound expression. Consider
double pi = 3.14;
double z = 2.0 * pi;
In this case 2.0 * pi is an r-value. The program has to take the value of 2.0 and the value of pi, multiply them together, and then assign the result to z. The intermediate result has to be stored somewhere – this kind of temporary storage is what we mean by an r-value. You can also think of r-values as things that can appear on the right of an assignment expression. There are also l-values, which you can think of as things that can appear on the left of an assignment expression.
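To make the distinction concrete, here is a minimal sketch:

double pi = 3.14;
pi = 2.0 * pi;     // pi is an l-value: it names storage we can assign to
// 2.0 * pi = 1.0; // error: 2.0 * pi is an r-value (a temporary), so we cannot assign to it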
But, just as it isn’t meaningful to assign to a constant (to change the value of a constant), it isn’t meaningful to assign to an r-value. What that means for function calls is this.
Suppose we define square to take its argument by reference. That means we might change the value. Since we can't change an r-value, we will get a similar compilation error as we did when we tried to pass in a constant.
double square(double &); // declare the prototype for square
double pi = 3.14;
double pi_squared = square(2.0 * pi); // error! 2.0 * pi is an r-value
std::cout << pi << std::endl;
As with calling square with a constant, it seems perfectly sensible to be able to call square on an expression like 2.0 * pi (on an r-value). The const qualifier solves this problem for us as well.
double square(const double &); // Pass by const reference
double pi = 3.14;
double pi_squared = square(2.0 * pi); // We can now call with an r-value
std::cout << pi << std::endl;
The relevant C++ Core Guidelines for functions and their parameters (illustrated in the sketch after the list) are:
F.2: A function should perform a single logical operation
F.3: Keep functions short and simple
F.16: For “in” parameters, pass cheaply-copied types by value and others by const reference
F.17: For “in-out” parameters, pass by reference to non-const
F.20: For “out” output values, prefer return values to output parameters
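As a sketch of how these guidelines look in practice (the function names here are hypothetical):

#include <cstddef>
#include <vector>

double scale(double x, double factor);         // F.16: small "in" parameters, pass by value
void print_all(const std::vector<double>& v);  // F.16: large "in" parameter, pass by const reference
void normalize(std::vector<double>& v);        // F.17: "in-out" parameter, pass by reference to non-const
std::vector<double> make_grid(std::size_t n);  // F.20: prefer returning the "out" value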
Procedural abstraction lets us collect together well-defined functionality into a single function and then use (and re-use) that function wherever we want to perform the particular operation it represents, simply by invoking the function. We can use the function anywhere we like in our program. In fact, we can use the given function anywhere we like in any program, provided we make it available in the proper way.
Recall that compiling a program is a multistep process. In one step, the source code in a given file is translated into an object file. The function calls we make in that source code can be to functions that are also defined in that file – or the calls can be to functions that we define in other files. In another step of the compilation process, all of the object files comprising our program are linked together to make an executable. At this point, the function calls that we made throughout the different files in the program are resolved to the actual definitions of functions in other files. The program will not link if a function call is made but there is no matching function definition.
Let’s consider an example.
Here we have some source code – assume it is contained in a single file.
#include <iostream>
#include <cmath>
double sqrt583(double z) {
double x = 1.0;
for (size_t i = 0; i < 32; ++i) {
double dx = - (x * x - z) / (2.0 * x); // Newton step for f(x) = x * x - z
x += dx;
if (std::abs(dx) < 1.e-9) break; // stop once the update is negligible
}
return x;
}
int main() {
std::cout << sqrt583(2.0) << std::endl;
return 0;
}
The file has the definition for sqrt583; that function is called in main. We can compile this file into an executable and run it. The function call and the function definition are both in the same file.
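For example, assuming the file is named main.cpp, we might build and run it like this:

$ c++ main.cpp
$ ./a.out   # prints an approximation to sqrt(2), about 1.41421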
Suppose now we want other programs to be able to use sqrt583. The first thing we need to do is to put the function definition into a file separate from main. Depending on how we want to organize our source code into files, we might put sqrt583 into its own file, or we might put it into a file with other related functions. The point is that we need to be able to compile sqrt583 separately from any program that might want to use it.
#include <cmath>
double sqrt583(double z) {
double x = 1.0;
for (size_t i = 0; i < 32; ++i) {
double dx = - (x * x - z) / (2.0 * x);
x += dx;
if (std::abs(dx) < 1.e-9) break;
}
return x;
}
Note that even though sqrt583 is defined in its own file, we invoke it exactly the same way as we did before:
#include <iostream>
int main() {
std::cout << sqrt583(2.0) << std::endl;
return 0;
}
But we need to do one more thing. If we try to compile the above code, we will get an error, because the compiler needs to set up a call to sqrt583 but it does not know the type of argument that the function takes (is it passed by value or by reference? Is it a float or a double?), and it does not know what the function returns. We can provide this information to the compiler in the form of a function declaration.
#include <iostream>
double sqrt583(double);
int main() {
std::cout << sqrt583(2.0) << std::endl;
return 0;
}
In a program of any non-trivial size, we will be making many different calls to many different functions. For each of those functions, we must have a declaration in order to be able to properly compile the code making the call. One way of doing that would be to determine which functions we are using, and then for every one of them, add the appropriate declaration. And we have to do that for all of our source files. This quickly becomes as unmanageable as it would be tedious.
There is a convention for providing function declarations that you have already used – header files. What header files provide are the function (and data type) declarations for a related set of functions whose definitions can be linked in later. In the case of the system headers, the functions belong to C++'s standard library and the compiler knows where to get those definitions; we don't need to tell it. In other cases, we will need to tell the compiler to link to specific libraries (we will see how to do this later in the course).
Consider this example, where we have a set of math functions specialized just for our course:
#include <iostream>
double sqrt583(double);
double expt583(double, double);
double sin583(double);
const double pi = 3.14;
// ... and more
int main() {
std::cout << sqrt583(2.0) << std::endl;
std::cout << expt583(42.0, 2.0) << std::endl;
std::cout << sqrt583(2.0 * pi) << std::endl;
// ... and more
return 0;
}
Rather than manually declaring each of those functions in every source file, we collect the declarations into a file amath583.hpp:
double sqrt583(double);
double expt583(double, double);
double sin583(double);
double cos583(double);
double tan583(double);
const double pi = 3.14;
// etc.
To avail ourselves of these declarations, we #include the file amath583.hpp:
#include <iostream>
#include "amath583.hpp"
int main() {
std::cout << sqrt583(2.0) << std::endl;
std::cout << expt583(42.0, 2.0) << std::endl;
std::cout << sqrt583(2.0 * pi) << std::endl;
// ... and more
return 0;
}
Now, the functions that are declared in amath583.hpp need to be defined somewhere. One option might be in a file amath583.cpp. To compile the main program above and to access the necessary function definitions, we can compile the files together:
$ c++ main.cpp amath583.cpp
This will produce a.out with all of the functions resolved.
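As a rough sketch, amath583.cpp might look like the following (sqrt583 as defined earlier; the other definitions here are hypothetical placeholders):

#include <cmath>
#include <cstddef>

double sqrt583(double z) {
  double x = 1.0;
  for (std::size_t i = 0; i < 32; ++i) {
    double dx = - (x * x - z) / (2.0 * x);
    x += dx;
    if (std::abs(dx) < 1.e-9) break;
  }
  return x;
}

double expt583(double base, double exponent) { return std::pow(base, exponent); }  // placeholder
double sin583(double x) { return std::sin(x); }                                    // placeholder
// ... and so on for cos583, tan583, etc.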
Another option is to compile main.cpp and amath583.cpp separately into object files (".o" files). This approach is more scalable, allowing individual files to be compiled and the resulting object files to be used (and re-used), rather than compiling all of the source for a program every time it is built.
To create an object file, we give the compiler the -c option. We can also specify the name of the object file using the -o option, but the default is usually fine – the output file will have the same name as the input file, but with .o substituted for .cpp. However, the -o option is useful for creating an executable with the name we want, rather than a.out.
$ c++ -c main.cpp # compiles main.cpp -> main.o
$ c++ -c amath583.cpp # compiles amath583.cpp -> amath583.o
$ c++ main.o amath583.o -o main.exe
The final step here links the two object files to create the executable main.exe.
In a large software project there will be numerous source code files and numerous header files. If we edit one of the files, we need to recompile the program. We don't want to (or need to) recompile all of the source code; rather, we want to recompile just the source code that was affected by the change we made. If we made a change to a particular .cpp file, we need to recompile that file into its .o file and then link to create the executable. If we made a change to a particular header file, we need to recompile all of the files that #include it.
Now, again imagine you are working on a large software project. You might make a change in the code that propagates through many source code files. Recompiling means figuring out which files have changed, and issuing the appropriate commands manually – a repetitive and boring task – but one perfectly suited for automation.
The dependencies among software artifacts in a project, as well as the rules for compiling and linking, can be succinctly captured and used to automate the build process. The dominant tool for this on Unix-like systems has been the make tool, which uses Makefiles to represent program dependencies. Properly automating the various steps in a large project can require sophisticated approaches to using make. We will be exploring make and Makefiles as part of our assignments throughout this course.
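As a preview, a minimal Makefile for the two-file example above might look like this (a sketch; real projects add compiler flags and more rules):

# Recipe lines must be indented with a tab character.
main.exe: main.o amath583.o
	c++ main.o amath583.o -o main.exe

main.o: main.cpp amath583.hpp
	c++ -c main.cpp

amath583.o: amath583.cpp amath583.hpp
	c++ -c amath583.cpp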
(Although the use of Makefiles reduces the burden of manually building software systems, writing Makefiles is itself a boring and repetitive task. The cmake tool has emerged as a sort of "meta" make for generating Makefiles. Using cmake is beyond the scope of this course, but you are likely to run across it if you build any modern open source project from scratch.)