software vs. hardware

For general rambling.
Jonathan
Grand Pooh-Bah
Posts: 6722
Joined: Tue Sep 19, 2006 8:45 pm
Location: Portland, OR
Contact:

software vs. hardware

Post by Jonathan »

Let's declare a winner!

Seriously, though. I believe that several years ago there was a widespread perception that software wasn't keeping up its end of the bargain: the march of increasing CPU and memory demands had slowed or ceased, and computers bought in the late nineties did everything a normal user might hope for. This was particularly true for CPUs. Once you had a machine that could multitask MP3 playback with IM and a browser, you were set.

Is this now becoming untrue again? I'm thinking along the lines of Aero, complex AJAX-y websites, and multicore games with sexy physics. Or is that 1 GHz P3 still the win?

George
Veteran Doodler
Posts: 1267
Joined: Sun Jul 18, 2004 12:26 am
Location: Arlington, VA

Post by George »

It takes 30% of my AMD 2800+ to decode and display 1080i HDTV streamed from my tuner card. That's using the DxVA hardware acceleration provided by the Radeon 9800. Without acceleration (i.e., if you have a crappy integrated video card) it took closer to 90%. So, yes, there are legitimate applications that need more power. And I think Vinny had some kind of movie file that needed something powerful to play also.

VLSmooth
Tenth Dan Procrastinator
Posts: 3055
Joined: Fri Jul 18, 2003 3:02 am
Location: Varies
Contact:

Post by VLSmooth »

George wrote:And I think Vinny had some kind of movie file that needed something powerful to play also.
H.264 decompression requires quite a bit of power.

For example, a 1024x576 H.264 video at 23.98 fps takes 45-70% of my Athlon XP 2500+. I'm sure it can exceed 70% depending on how much is happening in the scene.

Jonathan
Grand Pooh-Bah
Posts: 6722
Joined: Tue Sep 19, 2006 8:45 pm
Location: Portland, OR
Contact:

Post by Jonathan »

For both HD and H.264 playback, are you happy with your current CPU utilization, or is it too high?

VLSmooth
Tenth Dan Procrastinator
Posts: 3055
Joined: Fri Jul 18, 2003 3:02 am
Location: Varies
Contact:

Post by VLSmooth »

I'm happy with it the vast majority of the time. However, comparing subtitles between multiple groups' high-quality releases now sucks, since my computer stutters when playing more than one video simultaneously.

In the spirit of this thread, I don't see myself upgrading my processor until I actually demand more processing power. I've also recommended "bargain" processors to family since I know they won't use the power.

Noise reduction (lower power, hence less heat, hence fewer/slower fans, etc.) and miniaturization (for portability) are currently higher priorities.

VLSmooth
Tenth Dan Procrastinator
Posts: 3055
Joined: Fri Jul 18, 2003 3:02 am
Location: Varies
Contact:

Post by VLSmooth »

Also, to state the obvious, more hardware power is still a moot point if the software implementation sucks. FFXI finds some way to push my processor to 100%, regardless of whether I'm running other applications or not.
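Pure speculation on my part about what FFXI does internally, but the classic way a game ends up like that is an uncapped render loop that spins as fast as it can instead of sleeping. Roughly the difference between these two (C++ sketch; render_frame is a made-up stand-in for the per-frame work):

#include <chrono>
#include <thread>

void render_frame() { /* stand-in for the game's per-frame work */ }

// Pegs a core at 100% no matter how little there is to do:
void busy_loop() {
    for (;;) render_frame();
}

// Does the same work but caps at ~60 fps and yields the CPU in between:
void capped_loop() {
    using clock = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::milliseconds(16);
    for (;;) {
        auto start = clock::now();
        render_frame();
        std::this_thread::sleep_until(start + frame_budget);
    }
}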

George
Veteran Doodler
Posts: 1267
Joined: Sun Jul 18, 2004 12:26 am
Location: Arlington, VA

Post by George »

I'm satisfied, since the only other thing running on that computer is a newsgroup downloader. But if I didn't have another computer to play games on and had to run games on one monitor while watching TV on the other, I'd need a dual-core. Of course, my other computer has a dual-core, and I don't think I've loaded both cores simultaneously since I ran the burn-in tests months ago.
I feel like I just beat a kitten to death... with a bag of puppies.

quantus
Tenth Dan Procrastinator
Posts: 4891
Joined: Fri Jul 18, 2003 3:09 am
Location: San Jose, CA

Post by quantus »

News flash to the software industry: Single threads are basically not going to get much faster anymore.

So, yeah, I think the general feeling here is that hardware is still churning out more MIPS and FLOPS on schedule but the software industry is only just figuring out that they have to do something different if they're gonna actually be able to use them all. This is very evident in all of your comments and even a recent article on EE Times. The article complained that even EDA companies are not responding fast enough to take advantage of multi-core systems. These companies invest heavily in algorithms, yet they're still missing the boat on picking or developing the right ones to commercialize.
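To make "do something different" concrete, here's a rough C++ sketch of the kind of restructuring I mean: an embarrassingly parallel reduction split across however many hardware threads the box reports. The names and sizes are mine, purely for illustration:

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Sum a big array on N threads instead of one.
double parallel_sum(const std::vector<double>& data) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(n, 0.0);
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / n;
    for (unsigned i = 0; i < n; ++i) {
        std::size_t begin = i * chunk;
        std::size_t end = (i + 1 == n) ? data.size() : begin + chunk;
        // Each worker writes only its own slot, so no locking is needed.
        workers.emplace_back([&data, &partial, begin, end, i] {
            partial[i] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();  // wait for every partial result
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

The catch, of course, is that most interesting code doesn't decompose this cleanly.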

This all may be a moot point if Google just does all of my computing for me...
Have you clicked today? Check status, then: People, Jobs or Roads

Jason
Veteran Doodler
Posts: 1520
Joined: Fri Jul 18, 2003 12:53 am
Location: Fairfax, VA

Post by Jason »

quantus wrote:So, yeah, I think the general feeling here is that hardware is still churning out more MIPS and FLOPS on schedule but the software industry is only just figuring out that they have to do something different if they're gonna actually be able to use them all ...

This all may be a moot point if Google just does all of my computing for me...
I believe the software industry isn't smart enough to utilize multiple threads. I mean, c'mon, who do you know who is smart enough to develop parallelized algorithms? Also, if your statement that they're 'only just figuring this out' is accurate, that tells you how dumb they are. I include myself in this statement.

Google, however, might be smart enough (e.g., MapReduce).
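The idea is small enough to sketch. A "map" step emits key/value pairs from each input independently, which is what parallelizes, and a "reduce" step combines all the values for each key. This toy word count is just my illustration of the shape of the model, not Google's actual system:

#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Map step: each document independently emits (word, 1) pairs.
// Independence across documents is what makes it trivially parallel.
std::vector<std::pair<std::string, int>> map_doc(const std::string& doc) {
    std::vector<std::pair<std::string, int>> pairs;
    std::istringstream words(doc);
    std::string w;
    while (words >> w) pairs.emplace_back(w, 1);
    return pairs;
}

// Reduce step: combine all values that share a key. In the real thing
// the map calls run on many machines and a shuffle groups pairs by key;
// here everything runs in one loop just to show the shape of the model.
std::map<std::string, int> word_count(const std::vector<std::string>& docs) {
    std::map<std::string, int> counts;
    for (const auto& d : docs)
        for (const auto& kv : map_doc(d))
            counts[kv.first] += kv.second;
    return counts;
}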

Dave
Tenth Dan Procrastinator
Posts: 3483
Joined: Fri Jul 18, 2003 3:40 pm

Post by Dave »

Only if the government gives out tax breaks for people with multi-core processors!
It takes 43 muscles to frown and 17 to smile, but it doesn't take any to just sit there with a dumb look on your face.

George
Veteran Doodler
Posts: 1267
Joined: Sun Jul 18, 2004 12:26 am
Location: Arlington, VA

Post by George »

Depends on the industry. Defense/aerospace has been using multi-processor systems successfully for decades. And threading is one of the accepted solutions to some moderate and even hard real-time challenges.

But I agree the app and game developers don't know what they're doing.

quantus
Tenth Dan Procrastinator
Posts: 4891
Joined: Fri Jul 18, 2003 3:09 am
Location: San Jose, CA

Post by quantus »

George wrote:Depends on the industry. Defense/aerospace has been using multi-processor systems successfully for decades. And threading is one of the accepted solutions to some moderate and even hard real-time challenges.

But I agree the app and game developers don't know what they're doing.
Embedded systems programmers have had to deal with using more, slower processors for years, as you've pointed out. I applaud them.

I also agree with Jason that Google is smart enough. Not just because of their use of the inherently multi-threaded nature of AJAX, but also because they wrote their own OS and file system once they figured out damn fast that Windows (as everyone knows) or even vanilla Linux wouldn't scale reliably. AJAX will likely be the best example of multi-threaded programming for a while to come. The good thing about that is that many more programmers will start to get experience with programming threads.
Have you clicked today? Check status, then: People, Jobs or Roads

Jonathan
Grand Pooh-Bah
Posts: 6722
Joined: Tue Sep 19, 2006 8:45 pm
Location: Portland, OR
Contact:

Post by Jonathan »

Jeez, this thread is a year old. Time flies.

Anything new in this department? Off the top of my head, Crysis and maybe UT3. Other than that, not really?

quantus
Tenth Dan Procrastinator
Posts: 4891
Joined: Fri Jul 18, 2003 3:09 am
Location: San Jose, CA

Post by quantus »

There have been a few articles on EE Times about products coming out to speed up certain kinds of single-threaded software on multi-core/multi-processor systems. Mostly it's for software that is already typically multi-threaded in larger-scale applications.

Here's one
Have you clicked today? Check status, then: People, Jobs or Roads

quantus
Tenth Dan Procrastinator
Posts: 4891
Joined: Fri Jul 18, 2003 3:09 am
Location: San Jose, CA

Post by quantus »

EE Times: Opinion: Time to plow multiple paths to parallel computing
You have to wonder about who is piloting the ship in Redmond these days when the company can afford a $44 billion bid for Yahoo to try to bolster its position in Web search but only spends $10 million to attack a needed breakthrough to save its core Windows business.
Wheee, more people getting impatient with the state of the software industry these days. There's a lot of blame being thrown around and a call to action to anyone and everyone with a sizable R&D budget to throw more money this way.

Anyways, I've thought a little more on this, and I'm thinking that the "shoot from the hip" style of programming many people are used to (i.e., develop, test a case or two, release, sit back and wait for bug reports) is part of the reason that software is not scaling. For hardware, the verification problem for multi-core designs is much harder than for a single core. Maybe Jonathan can comment on how much more time it takes? So going multi-threaded probably takes a LOT more verification to get right, too. If there's not much discipline in testing a single-threaded program already, where are we going to get even more discipline to write the multi-threaded code we're gonna need?
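For a concrete example of why the discipline problem gets worse: the classic data race below will pass a casual test run most of the time and is still wrong. (Generic C++ illustration, not taken from any real product.)

#include <iostream>
#include <thread>

int counter = 0;  // shared and unsynchronized: this is the bug

void bump() {
    for (int i = 0; i < 100000; ++i)
        ++counter;  // read-modify-write that races with the other thread
}

int main() {
    std::thread a(bump), b(bump);
    a.join();
    b.join();
    // On a quiet machine this often prints 200000, so a test-a-case-or-two
    // workflow ships it; under load it silently loses updates.
    std::cout << counter << '\n';
}

Multiply that by every shared structure in a real program and you can see where the verification time goes.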
Have you clicked today? Check status, then: People, Jobs or Roads

Jonathan
Grand Pooh-Bah
Posts: 6722
Joined: Tue Sep 19, 2006 8:45 pm
Location: Portland, OR
Contact:

Post by Jonathan »

Multicore is no different from multiprocessor, to a first approximation. MP systems generally require some sort of formal basis as groundwork for architecting them and as a guide to verifying them. Without a formal model providing some level of guarantees in system behavior, the combinatorial state explosion is too much. I strongly suspect one or more formal models will become the basis for almost all shrink-wrapped software design. This change will be much like the way compilers took over development from assembly language. It'll be something in the neighborhood of decomposing code into tasks which land onto queues which get scheduled onto processing elements. This may or may not fit onto C++/C#; I have no idea.
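A toy sketch of the shape I mean, minus the formal model that would make it trustworthy; TaskPool and the queue layout here are my invention for illustration, not a design:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Tasks land on a queue; a fixed set of processing elements drains it.
class TaskPool {
    std::queue<std::function<void()>> q;
    std::mutex m;
    std::condition_variable cv;
    std::vector<std::thread> workers;
    bool done = false;
public:
    explicit TaskPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lk(m);
                        cv.wait(lk, [this] { return done || !q.empty(); });
                        if (done && q.empty()) return;
                        task = std::move(q.front());
                        q.pop();
                    }
                    task();  // run outside the lock
                }
            });
    }
    void submit(std::function<void()> task) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(task)); }
        cv.notify_one();
    }
    ~TaskPool() {  // drain remaining tasks, then join the workers
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
        for (auto& w : workers) w.join();
    }
};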

On a separate note, the link to the article about Patterson's institute mentions that the US is not producing enough power EEs. This is true. The engineers of my father's generation who work for the electric company are ready to retire. There aren't any young EEs waiting to replace them, though.
Disclaimer: The postings on this site are my own and don't necessarily represent Intel's positions, strategies, or opinions.

quantus
Tenth Dan Procrastinator
Posts: 4891
Joined: Fri Jul 18, 2003 3:09 am
Location: San Jose, CA

Post by quantus »

Dwindlehop wrote:Multicore is no different from multiprocessor, to a first approximation.
Agreed. I was gonna go back and change multicore to MP, but didn't because the problem is essentially the same.
Dwindlehop wrote:MP systems generally require some sort of formal basis as groundwork for architecting them and as a guide to verifying them. Without a formal model providing some level of guarantees in system behavior, the combinatorial state explosion is too much. I strongly suspect one or more formal models will become the basis for almost all shrink-wrapped software design. This change will be much like the way compilers took over development from assembly language. It'll be something in the neighborhood of decomposing code into tasks which land onto queues which get scheduled onto processing elements. This may or may not fit onto C++/C#; I have no idea.
I wonder if the massive jump that IT is making towards SOA will get bundled up in this somehow. Calling a service is like calling another thread somewhere. One of the main problems with services, as a Yahoo engineer correctly pointed out, is the high overhead of the communication involved. If some work went into optimizing these calls when both ends are on the same system, then maybe SOA could be leveraged as an easy programming model for parallel computing. Work still has to go into algorithms that minimize the amount of IPC. Distributed service management would be your queue/scheduler, and the services would be your processing elements. The language of each task wouldn't matter; only a new language for the composition of the system is needed.
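Half-baked C++ sketch of what I'm picturing, with everything made up for illustration: if a service call is just an interface returning a future, the composition layer doesn't care whether the implementation crosses the network or hops to another core, and the same-system case can skip the marshalling entirely:

#include <future>
#include <string>

// The composition layer only sees this interface; whether the work runs
// on another machine or another core is the implementation's business.
struct Service {
    virtual std::future<std::string> call(const std::string& request) = 0;
    virtual ~Service() = default;
};

// Same-system case: no marshalling, no network, just a task handed to
// another thread, so most of the communication overhead disappears.
struct LocalService : Service {
    std::future<std::string> call(const std::string& request) override {
        return std::async(std::launch::async,
                          [request] { return "handled: " + request; });
    }
};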
Dwindlehop wrote:On a separate note, the link to the article about Patterson's institute mentions that the US is not producing enough power EEs. This is true. The engineers of my father's generation who work for the electric company are ready to retire. There aren't any young EEs waiting to replace them, though.
I noticed that same thing at the end of an article too and at first I was like, why is this in an article about parallel computing?

Do you think this is a career opportunity?
Have you clicked today? Check status, then: People, Jobs or Roads

Jonathan
Grand Pooh-Bah
Posts: 6722
Joined: Tue Sep 19, 2006 8:45 pm
Location: Portland, OR
Contact:

Post by Jonathan »

If you go into power distribution you won't be wanting for jobs until we decide as a society to move away from centrally generated electricity. There's also some chance that the market will value these engineers much more in the near future.

Peijen
Minion to the Exalted Pooh-Bah
Posts: 2790
Joined: Fri Jul 18, 2003 2:28 pm
Location: Irvine, CA

Post by Peijen »

quantus wrote:Anyways, I've thought a little more on this, and I'm thinking that the "shoot from the hip" style of programming many people are used to (i.e., develop, test a case or two, release, sit back and wait for bug reports) is part of the reason that software is not scaling. For hardware, the verification problem for multi-core designs is much harder than for a single core. Maybe Jonathan can comment on how much more time it takes? So going multi-threaded probably takes a LOT more verification to get right, too. If there's not much discipline in testing a single-threaded program already, where are we going to get even more discipline to write the multi-threaded code we're gonna need?
This has a lot to do with the cost of deploying bug fixes vs. the cost of recalling a faulty product.

quantus
Tenth Dan Procrastinator
Posts: 4891
Joined: Fri Jul 18, 2003 3:09 am
Location: San Jose, CA

Post by quantus »

Yes, you're right, it has something to do with the cost of putting an update into place and fixing the harm done by a faulty product. But there are two issues with that in software. First, the more complex a system gets, the more updates have a way of spawning new, unexpected issues, and software systems tend to get complex pretty quickly. As Jonathan commented, how well you can manage that complexity has a lot to do with the architecture. Second, it's hard to measure the full impact of faulty products on consumers. It's easy to get a reputation for poor quality and hard to convince people to try your products again once they have that (mis?)conception.

Anyway, as Jonathan said, the state space for verifying a multi-core/multi-processor system is way larger than for a single-processor system, and the same is probably true for multi-threaded vs. single-threaded software. So my point is: if there's not much of a foundation of verification for software now, it may be very hard to transition to multi-threaded software.
Have you clicked today? Check status, then: People, Jobs or Roads
