I thought it might be fun to have a thread for discussion related to software implementation.
Here is a quick guide I wrote for a friend on the lovely C++ std::vector (note, if you can, use the Boost Library instead):
Everyone loves Bjarne's std::vector! Anyone who seriously believes that a raw two-dimensional array is substantially more efficient than a properly managed std::vector is simply incorrect - I know, I've done the performance comparisons (it's hard for me to admit that). Here is a rundown of the basics required for implementing memory-hungry, expensive algorithms with the std::vector class:

//declaration
vector<vector<int> > myVector;
//do not forget the space before C++11 - it keeps the compiler from
//reading the closing ">>" of a nested template as the right-shift operator

//initialization
myVector.resize(MY_HEIGHT);
//setting the vector to your maximum intended size up front prevents
//expensive automatic resizing from occurring
//and reduces overhead immensely
for(int i = 0; i < MY_HEIGHT; ++i)
    myVector[i].resize(MY_WIDTH); //we must also size each inner vector

//function prototypes that use vectors
void myFunc(vector<vector<int> >& myVector);
//you really, really should pass by reference
//if you don't, every element of the std::vector gets copied
//doing so is a terrible idea
//another fun note - a C-style array decays to a pointer when passed,
//so its elements are never copied either

//manual deallocation (for champions)
//let's clear each of the inner vectors
for(int i = 0; i < MY_HEIGHT; ++i)
    myVector[i].clear();
//time to clear the outer vector
myVector.clear();
//note that clear() empties a vector but may leave its capacity allocated;
//the C++03 idiom to truly release the memory is
//vector<vector<int> >().swap(myVector);
//you really don't need to worry about any of this if your std::vector
//simply goes out of scope

As another note, a std::vector is a semi-managed data structure: the small vector object itself lives wherever you declare it (here, the stack), while all of the actual elements are stored on the heap for you. That matters because the stack is small (the default is commonly 1 to 8 Megabytes, depending on the platform) - nowhere near enough for large data sets. Finally, I would seriously consider alternatives to the "int" primitive. An int is typically 32 bits, so an unsigned int tops out at 2^32 - 1. Do you really need to store values that large? A 100x100 grid of 32-bit values takes exactly twice the memory of the same grid at 16 bits: a program could be using 2 Gigabytes of memory when it only really needs 1 Gigabyte! For most implementation work, you might find the very cool uint8_t or uint16_t to be optimal. These are, of course, unsigned, but for most algorithms (besides negatively weighted traversals) those two types are immensely useful. Sometimes -2^31 to 2^31 - 1 clearly isn't what you need; 0 to 2^16 - 1 is usually right on the money!
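To make the size argument concrete, here is a minimal sketch - the names and the 100x100 dimensions are mine, not from the guide above - comparing the element storage of the same grid at 32 and 16 bits:

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    const std::size_t HEIGHT = 100, WIDTH = 100; //illustrative dimensions

    //same grid, two element widths; the fill constructor sizes both
    //dimensions in one shot, equivalent to the resize loop above
    std::vector<std::vector<std::int32_t> > wide(HEIGHT, std::vector<std::int32_t>(WIDTH));
    std::vector<std::vector<std::uint16_t> > narrow(HEIGHT, std::vector<std::uint16_t>(WIDTH));

    //element storage only - ignores per-vector bookkeeping overhead
    std::cout << "int32:  " << HEIGHT * WIDTH * sizeof(std::int32_t) << " bytes\n";  //40000
    std::cout << "uint16: " << HEIGHT * WIDTH * sizeof(std::uint16_t) << " bytes\n"; //20000
}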
The main reasons I am "good" at programming are that I enjoy challenges, I know how to use Google, and I can usually follow the logic easily.
However, I tend not to know much syntax or many conventional names, preferring to google them as I need them. So I don't really know "Bjarne's std::vector." I could probably understand the entire code if I had the time to google things, though.
I should go mining for some of the funnier and stranger pieces of code I have written, though. I know I have a highly inefficient but original Java program for calculating the partitions of an integer (the number of ways of writing a number as a sum of positive integers).
Bjarne Stroustrup created C++.
I do not believe there is an efficient way to calculate p(n). The naive recursive formulation is very inefficient.
2. My method was horrendous. It built an n-by-n recursive matrix to calculate p(n). I was just proud of it because I invented it without knowing much about partitions at all.
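For what it's worth, an n-by-n table like that can be built with the standard "parts of size at most j" recurrence. A minimal C++ sketch - my reconstruction of the general idea, not the original Java program:

#include <iostream>
#include <vector>

//p(n) via an (n+1) x (n+1) table: ways[i][j] = partitions of i into parts <= j
long long partitions(int n) {
    std::vector<std::vector<long long> > ways(n + 1, std::vector<long long>(n + 1, 0));
    for (int j = 0; j <= n; ++j)
        ways[0][j] = 1;                       //one way to partition 0: the empty sum
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= n; ++j) {
            ways[i][j] = ways[i][j - 1];      //don't use a part of size j...
            if (i >= j)
                ways[i][j] += ways[i - j][j]; //...or use at least one
        }
    return ways[n][n];
}

int main() {
    std::cout << partitions(10) << "\n"; //prints 42
}

Both dimensions of the table are n + 1, so this is O(n^2) time and space.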
The well-known recursive formula is really fast, though: a summation over pentagonal numbers, which are quadratically spaced. Meh.
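For reference, that formula is Euler's pentagonal number recurrence. A sketch, assuming long long is wide enough (p(n) overflows it somewhere past n of around 400):

#include <iostream>
#include <vector>

//p(0..n) via Euler's pentagonal number recurrence:
//p(n) = sum over k >= 1 of (-1)^(k+1) * [ p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ]
std::vector<long long> partitionTable(int n) {
    std::vector<long long> p(n + 1, 0);
    p[0] = 1;
    for (int i = 1; i <= n; ++i)
        for (int k = 1; k * (3 * k - 1) / 2 <= i; ++k) {
            int g1 = k * (3 * k - 1) / 2;           //generalized pentagonal numbers
            int g2 = k * (3 * k + 1) / 2;
            long long sign = (k % 2 != 0) ? 1 : -1; //signs alternate in pairs
            p[i] += sign * p[i - g1];
            if (g2 <= i) p[i] += sign * p[i - g2];
        }
    return p;
}

int main() {
    std::cout << partitionTable(100)[100] << "\n"; //prints 190569292
}

Because the pentagonal numbers grow quadratically, the inner loop runs only about sqrt(n) times per entry, for roughly O(n^1.5) total work.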
There is also the closed form (Rademacher's formula), which is hilarious and horrendous: hyperbolic sines and an n - 1/24 term. Sigh.
3. I need to learn more about big-O notation and algorithm speeds. It seems interesting, but I never formally learned it - I just picked it up from context. I know it describes roughly how an algorithm's running time grows with n, but I don't know much about calculating it.
It is amortized O(n), but I'd say it is worst case O(n^2).
You can solve recurrences with term expansion and lots of log / exponent rules.
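A standard example (not from this thread), expanding T(n) = 2T(n/2) + n:

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     = 2^k T(n/2^k) + kn

The expansion bottoms out at k = log2(n), giving T(n) = nT(1) + n*log2(n), so T(n) = O(n log n).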
Orders of magnitude are pretty awesome. It isn't so much speed as a bound on the number of operations.
There are worst, amortized, and average cases for big O. Big O is an upper bound, Theta is a tight bound, and Omega is a lower bound. These are defined in terms of specific constants that let you compare growth rates of operation counts.
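For the record, the standard definitions (not from this thread): f(n) = O(g(n)) means there exist constants c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0; Omega flips the inequality to f(n) >= c*g(n); Theta requires both, i.e. c1*g(n) <= f(n) <= c2*g(n).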