When quoting the running time of a data structure operation, we use the following estimates:
The actual execution time depends on the specific data stored in the data structure. Our first approximation is to consider only the number of elements it contains. Data structures of the same size \(n\) but with different contents may have slightly different running times, so for each size \(n\) we quote a range of possible execution times, and we make this range wide enough to capture the variation caused by the different data a structure of size \(n\) might hold.
Next, we approximate the execution time using a simplified model of computation. The actual execution time also depends on the details of the CPU and memory, the programming language, the optimizations the compiler performs, and many other factors. Our approximation (again a range of possible execution times) is made wide enough to cover all of these possibilities, no matter which CPU or programming language you use.
The way we specify these execution times is big-O notation. For an interactive alternative way to learn big-O, I highly recommend Codecademy; it uses JavaScript, but the ideas are well explained. The book's description of big-O is in Section 1.3, which is what I mostly followed in class.
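For reference, the standard definition goes like this: we say \(f(n)\) is \(O(g(n))\) if there exist constants \(c > 0\) and \(n_0\) such that
\[
f(n) \le c \cdot g(n) \quad \text{for all } n \ge n_0.
\]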
Finally, in data structures and algorithms there is a social convention that when you quote a big-O time for an operation, the constants \(n_0\) and \(c\) are of reasonable size. Mathematically, a running time could be \(O(n)\) and yet only drop below \(c \cdot n\) once \(n\) is as large as the number of atoms in the universe. We will obviously never have a data structure with that many elements, so in that case the fact that the execution time is \(O(n)\) is not very helpful. Therefore, as a community we have agreed that if the constants are large you should give some additional information about them; the absence of any qualifier on a quoted big-O time implicitly means the constants are of reasonable size. This is only a social convention, and it is specific to data structures and algorithms: mathematicians also use big-O notation, and they usually are not concerned with the size of the constants.
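As a concrete (hypothetical) illustration: the function \(f(n) = n + 10^{80}\) is \(O(n)\), since we can take \(c = 2\) and \(n_0 = 10^{80}\) and then
\[
n + 10^{80} \le 2n \quad \text{whenever } n \ge 10^{80},
\]
but this bound tells us nothing useful for any data structure we will ever actually build.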
I then continued with ArrayStacks, analyzing the running time of the operations.
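For concreteness, here is a minimal Java sketch of an ArrayStack along the lines of what we covered; the names and details are my own paraphrase and are not necessarily identical to the book's code.

```java
// Minimal ArrayStack sketch: elements live in a backing array `a`,
// and `n` counts how many slots are actually in use.
public class ArrayStack<T> {
    T[] a;   // backing array (may be longer than n)
    int n;   // number of elements stored

    @SuppressWarnings("unchecked")
    public ArrayStack() {
        a = (T[]) new Object[1];
        n = 0;
    }

    // get and set touch a single array slot, so they run in O(1) time.
    public T get(int i) { return a[i]; }

    public T set(int i, T x) {
        T y = a[i];
        a[i] = x;
        return y;
    }

    // add(i, x) shifts a[i..n-1] one position to the right, so ignoring
    // resizing it takes O(1 + n - i) time; adding at the end is O(1)
    // amortized.
    public void add(int i, T x) {
        if (n + 1 > a.length) resize();
        for (int j = n; j > i; j--) a[j] = a[j - 1];
        a[i] = x;
        n++;
    }

    // remove(i) shifts a[i+1..n-1] one position to the left, so it also
    // takes O(1 + n - i) time, ignoring resizing.
    public T remove(int i) {
        T x = a[i];
        for (int j = i; j < n - 1; j++) a[j] = a[j + 1];
        n--;
        if (a.length >= 3 * n) resize();
        return x;
    }

    // resize() copies all n elements into a new array of length 2n,
    // which costs O(n); amortized over many operations this adds only
    // O(1) extra work per add or remove.
    @SuppressWarnings("unchecked")
    void resize() {
        T[] b = (T[]) new Object[Math.max(2 * n, 1)];
        for (int j = 0; j < n; j++) b[j] = a[j];
        a = b;
    }
}
```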