As a C programmer I was told that I should use as little memory as possible. Unfortunately, that advice didn't come with the caveat that memory usage has to be traded off against software complexity and computing speed: more malloc() calls mean more instructions to execute, so computing speed suffers and software complexity grows (for example, what happens if malloc() fails?), and vice versa. I was aware of the trade-off, but the paradigm shift hadn't really sunk in. Now it has.
Two days ago I was in the final discussion of the Service Publishing AP project. At the end, I told everyone that in Java one cannot peek into the receive buffer of a UDP socket, so one cannot allocate a ByteBuffer of exactly the right size for the next pending UDP packet, whereas in C this can be done with the MSG_PEEK socket option. One of the supervisors then asked me why I didn't simply allocate a static buffer large enough to hold the largest possible UDP message. Still stuck in the old paradigm that memory is scarce, I answered that a UDP packet is large. He then asked me how large it actually is. Well, after doing the arithmetic, I realized that the largest UDP datagram is only 65,535 bytes (the length field is 16 bits), which is tiny compared to the memory available to an application on an Android gadget or on an AP, currently about 16 MB! After that, I came to my senses and shifted my paradigm.
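For illustration, here is a minimal sketch of the supervisor's suggestion in Java, assuming a plain blocking DatagramSocket; the class name and port number are made up. A single worst-case buffer of 65,535 bytes is allocated once and reused, and DatagramPacket.getLength() reports the actual size of each received datagram, so no peeking is needed at all.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpReceiver {
    // A UDP datagram can never exceed 65,535 bytes because its length
    // field is only 16 bits wide (the usable payload is slightly smaller
    // once the UDP/IP headers are subtracted).
    private static final int MAX_UDP_SIZE = 65535;

    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(4445)) { // hypothetical port
            byte[] buffer = new byte[MAX_UDP_SIZE]; // one reusable, worst-case buffer
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet); // blocks until a datagram arrives
                // getLength() gives the real size of this particular datagram,
                // so nothing is lost by over-allocating the buffer.
                System.out.printf("received %d bytes from %s%n",
                        packet.getLength(), packet.getSocketAddress());
            }
        }
    }
}
```

The cost of this approach is a single 64 KB allocation for the lifetime of the socket, which is exactly the kind of trade-off the new paradigm tells me to accept.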
To conclude, when developing an application or a program, one must understand the machine on which the software is expected to run, so that the right trade-off between software complexity, computing speed, and memory usage can be made.