In response to Dwindlehop's poll, I'm not sure, but probably less than one. However, what's the chance of running ~10 hours of testbench code and forgetting to add additional noise to your inputs, forcing you to run all the tests again?
P.S. That could be 10 hours while using three computers simultaneously. The "bug" might have been caught only an hour ago. The results might still need to pass through (many) security layers before delivery. The computers in question might not support unattended processing. And the results could be due at 9 AM tomorrow.
This very well could suck.
Just a hypothetical question... of course.
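Hypothetically speaking, a tiny seeded noise helper with an up-front sanity check is cheap insurance against exactly this. A sketch in Python (the function name and sigma value are made up for illustration, not from any real testbench):

```python
import random

def add_input_noise(samples, sigma=0.01, seed=1234):
    """Return a copy of the stimulus with Gaussian noise on every sample.

    Seeded so a failing run can be reproduced exactly. The name and
    default sigma are illustrative, not from any particular tool.
    """
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, sigma) for s in samples]

clean = [0.0, 0.5, 1.0]
noisy = add_input_noise(clean)

# Sanity-check BEFORE the 10-hour run: noise was actually applied.
assert len(noisy) == len(clean)
assert noisy != clean
```

The point is the two asserts at the bottom: they cost nothing and fail immediately if you forgot the noise, instead of ten hours and three computers later.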
Re: hell
-
- Grand Pooh-Bah
- Posts: 6722
- Joined: Tue Sep 19, 2006 8:45 pm
- Location: Portland, OR
Heh, how many computers have you tied up at once producing data you didn't need? I'm pretty sure the number is in the low thousands for me, but obviously that was unattended. In terms of stuff I've had to babysit, I've easily nursed jobs through multiple workdays of watching them run and crash, then restarting them manually. The mind boggles.
As far as not adding any experimental variable to my experiment, I've done that tons but rarely for more than a few hundred computers at once.
Last edited by Jonathan on Thu Oct 05, 2006 12:17 am, edited 1 time in total.
See, it doesn't have to be wasting time with a computer. At work today we had a good laugh about the fact that four of us were untangling a 300 ft spool of Ethernet cable. Considering the salaries in that little circle, it would have been far cheaper to buy a new cable than to untangle that one (and if acquisitions didn't take months, we would have).
Your tax dollars at work.

I was assigned a task that was behind schedule, so I didn't feel I had time to do unit-level testing of the code on my workstation. Instead, all I did was get the code to the point where it would compile. Then I'd tie up one of two benches shared by forty or so people. And since the code communicated with another subsystem, I'd need an engineer from that subsystem present, along with a Systems engineer and occasionally others. They'd all sit there for three-hour bench shifts while I fixed typos in my code that a five-minute unit test would have found. The whole effort took six months, and I suspect we could have finished most of it in three if we'd done unit testing.
I feel like I just beat a kitten to death... with a bag of puppies.
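For the record, the five-minute unit test in question doesn't have to be fancy. A sketch using Python's stdlib unittest (the checksum routine is hypothetical, standing in for the kind of small function whose typos ate those bench shifts):

```python
import unittest

def frame_checksum(payload: bytes) -> int:
    """Hypothetical helper: byte sum modulo 256.

    Exactly the kind of small routine where an off-by-one typo
    hides until bench time.
    """
    return sum(payload) % 256

class FrameChecksumTest(unittest.TestCase):
    def test_empty_payload(self):
        self.assertEqual(frame_checksum(b""), 0)

    def test_wraps_at_256(self):
        # 200 + 100 = 300, and 300 % 256 = 44
        self.assertEqual(frame_checksum(bytes([200, 100])), 44)

# Run the suite in-process; no bench, no other engineers required.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FrameChecksumTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Five minutes of this on a workstation versus three-hour shifts with four people watching: the arithmetic isn't close.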
In another case, I wasted four shifts (12 hours) uncovering four bugs because we didn't have the right people present. Each time we found a single bug, we spent the rest of the shift characterizing it. Then, after the shift, we went to the person responsible, who was able to identify and solve the problem in minutes. A week or two later, when we finally got another shift, we'd verify the fix in the first few minutes, then uncover another, unrelated bug and repeat the cycle. So what could have been accomplished in about an hour total took twelve hours of bench time, 24+ man-hours, and five or six weeks of schedule time.
And just one more. We can process data sets on our PC workstations, which is nice because we can use debuggers and so on without tying up any other resources, but they run something like 70x slower than the real system. I wanted to check that some algorithm changes I'd made didn't cause any unintended side effects, so I ran one data set through a new and an old copy of the code. Each run took something like five hours, consumed 99% of my workstation's resources (so I couldn't do much else), and required me to be present the whole time. After completing both runs, I fed the results into a comparison tool. It showed that almost every output was different. I then remembered that a recalibration had occurred between the two versions, making it pointless to compare their results.
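The cheap guard here would have been a pre-flight diff on a tiny slice of data before committing to two five-hour runs. A rough sketch (the function name and tolerance are invented for illustration):

```python
import math

def count_mismatches(old_outputs, new_outputs, rel_tol=1e-6):
    """Count element-wise differences between two runs' outputs.

    Running this on a few seconds' worth of data first would have
    exposed the recalibration mismatch before ten workstation-hours
    were spent. rel_tol is illustrative; pick one that matches your
    numeric format.
    """
    assert len(old_outputs) == len(new_outputs)
    return sum(
        1 for a, b in zip(old_outputs, new_outputs)
        if not math.isclose(a, b, rel_tol=rel_tol)
    )

# Tiny smoke slice: identical except one recalibrated value.
old = [1.0, 2.0, 3.0]
new = [1.0, 2.0, 3.3]
print(count_mismatches(old, new))  # -> 1
```

If the smoke slice already shows "almost every output different," you've learned the calibration lesson in minutes instead of hours.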
-
- Tenth Dan Procrastinator
- Posts: 4891
- Joined: Fri Jul 18, 2003 3:09 am
- Location: San Jose, CA
Ha, you and your tests. We just write code for whatever instance of the project we're working on, and if it runs and produces reasonable-looking results, then it works, and we assume it will work for all other projects, past and future. Once we go to use the code on some other project, it's constant debugging of something. Wheee! God, I wish we had test suites of some sort. Of course, we have so many points of customization for each project instance that I doubt we could build a test suite simple enough to be maintainable. I guess this means I also wish we had test-driven development.
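For what it's worth, per-project customization doesn't have to rule out a test suite: you can parameterize one test body over each project's config. A sketch using stdlib unittest's subTest (the configs and scale function are made up, just to show the shape):

```python
import unittest

# Hypothetical stand-ins for per-project customization points; a real
# suite would load these from each project instance's config files.
PROJECT_CONFIGS = {
    "project_a": {"scale": 1.0},
    "project_b": {"scale": 2.5},
}

def apply_scale(value, config):
    return value * config["scale"]

class PerProjectTests(unittest.TestCase):
    def test_scaling_preserves_sign(self):
        # One test body, run once per project; a failure reports which
        # project's customization broke, instead of waiting for the
        # next project to hit it in production.
        for name, cfg in PROJECT_CONFIGS.items():
            with self.subTest(project=name):
                self.assertGreater(apply_scale(1.0, cfg), 0.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PerProjectTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The suite stays maintainable because new customization points mean new config entries, not new test code.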