As it turned out, our DBAs lost track of a request, and luckily somebody noticed today.
I suppose a conference call could be productive if everyone on the call is adding some value. Unfortunately that is seldom the case.
If we simply take ownership of the means of production, value add will not be a problem.
In any case, our DW team is a bunch of bumblers. Yet another last-minute request for something that should have been planned ahead.
The same last-minute request that happened last time.
hey, maybe some link for davew
didn't see it that way yet, but it really mirrors what I've experienced:
My biggest beef with C++ is that everything it adds to C is designed to make code easier to write, and none of the things it adds are designed to make code easier to read.
I find C++ much harder to read than C. You can usually take a C program, and examine some function in it and right away more or less tell what it’s doing. It’s not typically couched in a mountain of invisible context. With C++, you’ve generally got to read a *lot* more code to be certain you’re really understanding what a given method is really doing.
And, I find that C++ programs tend to have about 50% of the complexity created by the problem domain, and about 50% added by the programmer to create some elaborate class hierarchy or network, whereas the comparable C program will be more like a 70/30 or 80/20 split, with the larger fraction of complexity due to the problem domain, and the lesser fraction introduced by the programmer. In other words, it seems to me that a typical C++ program will be carrying an extra load, and not a small load, of complexity which has nothing to do with the problem being solved.
> My biggest beef with C++ is that everything it adds to C is designed
> to make code easier to write, and none of the things it adds are
> designed to make code easier to read.
It's designed to make it easier to not repeat yourself.
Have you ever had to work on a project where you were required to focus on branch coverage? "Never repeat yourself" becomes way more important in those cases.
That said, C++ is a hybrid language. Most of the complexity of the language arises out of the desire to retain strong, static typing that's largely resolved at compile time, and to keep all of the runtime performance of C.
Ruby, Scala, Objective-C, and Smalltalk are all examples of languages that probably have a lower complexity level, while still doing all they can to support never repeating yourself.
If only I can understand the program then I will remain employed.
Everybody instinctively thinks about efficiency and doing the right thing and all that, but the reality is we all have to eat and would like a roof over our heads, so exactly why should I cater to the easiest thing that anybody can do?
I'm cheap. I am penny wise and pound foolish on occasion, but it is my penny wise that allows me to be pound foolish.
People who consider themselves less well off than me don't bother returning their recyclable materials for the 5 cent deposit they paid.
Last time I returned cans. I got $17 for it. Not a lot, but lunch for a few days.
Most people wouldn't bother because it's not worth $17 to them.
To them I say "I'll happily take your $17."
> Everybody instinctively thinks about efficiency and doing the right
> thing and all that, but the reality is we all have to eat and would
> like a roof over our head, so exactly why should I cater to the easiest
> thing that anybody can do?
In that case, use C++ :)
Heh. McCarthy's original LISP paper in CACM shied away from the term "garbage collection" for some reason, even though they were already using the term at MIT by that time. They called it "reclamation" in the paper.
I have a new project going out the door soon. It's a big batch job with a multithreading requirement for performance.
Development went fine with small test cases on my workstation never consuming much memory.
Scaling it up on the production testbed was another story. Tried it with multithreading enabled for the first time. 10 threads buried the collector in garbage and it got to the point that it was spending near 100% CPU in the collector.
So, I had to do a lot of tuning. Scaled the threads back to 5, tuned the heap flags, added a bit more memory to the machine. Learned a bit about the GC options in the process.
So I went through a few iterations and found a few places where I could win back some constant scaling factors. For a while I had it down to spending 10-20% of CPU in the collector and that was "good enough." But I finally found the biggest issue and it's down to 2.5% in collection now.
Which might even be less time than a C++ program would spend doing allocation.
So, the thing about GC is when you have retention issues, you often (not always, but often) don't have unbounded growth the way you would in C/C++. There's usually an upper bound on your memory consumption (it just might happen that that upper bound is way more than your machine can handle ;)
Fucking hate sloppy languages, and this includes non-strict typing.
should have been called pacMan():
(a la death bed)
herm. that should be called hack_to_get_around_our_original_genius_hack.
> Scaling it up on the production testbed was another story. Tried it
> with multithreading enabled for the first time. 10 threads buried the
> collector in garbage and it got to the point that it was spending near
> 100% CPU in the collector.
Is there any such thing as a GC environment where one thread is dedicated to garbage collection?
It's gotta be far, far more efficient to just stop everybody and make them wait while you do your pass or two.
I'm sure ls will enlighten us as to how it's not that bad and you can collect while other threads are running...