The limit of silicon chip technology as we know it is probably almost here. Two ways out of this that I've heard are based on going beyond the basically 2D nature of current chips: (A) deposit a whole new layer of silicon substrate and build a new chip on top of the first one; every time you do that, you multiply the effective density. Or (B) move to a wholly new process, maybe discrete transistors held together by nanoassembled carbon-nanotube wires.
No one can tell if this stuff will get here just in time or a little late, but one way or another the upward march of computing power will move onward!
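Just to make the "multiplication of effective density" in option (A) concrete, a quick back-of-the-envelope in Python. The base density is a made-up number, and this deliberately ignores yield, heat, and interconnect, which are the real fights:

    # Stacking k device layers in the same footprint multiplies transistors
    # per unit area roughly k-fold (illustrative arithmetic only).
    def stacked_density(base_per_mm2, layers):
        return base_per_mm2 * layers

    base = 100e6                      # hypothetical 100M transistors/mm^2 process
    for layers in (1, 2, 4):
        print(layers, stacked_density(base, layers))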
Massive parallelism. Even if faster chips can't be invented, chips at speed x will still drop in price over time, allowing more and more of them to be sold for the same price. The challenge is in programming that parallelism and in dealing with the depressing reality that inherently sequential processes may never be sped up.
That's just a summary of what I've heard, and I'd say I have 75% confidence in its reasonableness.
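To put a rough number on the "inherently sequential processes may never be sped up" point above: that's Amdahl's law. A quick Python sketch, where the 10% sequential fraction is just an assumed example figure:

    # Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s is the
    # fraction of the work that stays sequential no matter how many cores you add.
    def amdahl_speedup(sequential_fraction, cores):
        return 1.0 / (sequential_fraction + (1.0 - sequential_fraction) / cores)

    # With even 10% of the work stuck being sequential, piling on cores flattens out fast:
    for cores in (2, 8, 64, 1024):
        print(cores, round(amdahl_speedup(0.10, cores), 2))
    # 2 -> 1.82, 8 -> 4.71, 64 -> 8.77, 1024 -> 9.91 (never better than 10x)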
Ditto. This is the path we're taking in our OS development at my company. We assume CPU speeds are plateauing, but the number of cores per box, across all of our product line, is likely to keep increasing. The focus is on acknowledging this and introducing some new parallelization primitives at the OS level.
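I obviously can't speak to what those OS-level primitives actually look like, but as a userspace stand-in for the idea, here's a minimal "spread this over every core in the box" helper in Python:

    # Minimal sketch of a "use all the cores in the box" primitive, done in
    # userspace with the standard library (not the OS-level mechanism
    # the comment above is describing).
    from concurrent.futures import ProcessPoolExecutor
    import os

    def parallel_map(fn, items):
        # Fan the work out across however many cores this box happens to have.
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            return list(pool.map(fn, items))

    def expensive(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        print(parallel_map(expensive, [100_000] * 8))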
It certainly looks like the direction Intel is taking is to keep adding cores.
It is THE thing coming out of academic computer architecture, and pretty much has been for the last several years. That and power consumption. I know that Google and Goldman are actively adopting massive parallelization of their software in order to take advantage of the trend, and Red Hat is also adding a lot of parallelization tools to their OS and compiler libraries.
Man, you guys need some advances in the next 10-20 years that'll convince everyone to switch, before everyone starts *really* caring about performance.
(an "advance" could include "something that shows regular people that functional programming is the route to performance")
I guess you didn't get my allusion. The argument that gets advanced is "pure functional programming is the only way to go, because programming a multicore machine explicitly is impossible, and the only way compilers are going to be able to target one is if you use a language without implicit state."
Oh right, and the multicore thing will happen before the "performance improvements stop happening" thing, so we may end up in a functional paradigm before we start getting massive performance anxiety.
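For anyone who hasn't seen the "no implicit state" argument spelled out, here's a toy version in Python (standing in for an actual functional language): the pure map can be handed to a pool of workers blindly, while the loop with the shared accumulator can only be split once you've convinced yourself the updates can be reordered.

    # Toy version of the "pure functions parallelize mechanically" argument.
    from multiprocessing import Pool

    def score(x):
        # Pure: the result depends only on the argument, no hidden state.
        return x * x + 1

    def pure_total(xs):
        with Pool() as pool:              # safe to farm out blindly
            return sum(pool.map(score, xs))

    def stateful_total(xs):
        total = 0
        for x in xs:
            total += score(x)             # implicit mutable state: a compiler has to
        return total                      # prove these updates commute before splitting

    if __name__ == "__main__":
        xs = list(range(1000))
        assert pure_total(xs) == stateful_total(xs)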
Part of the problem is that we can't even figure out how software will continue to develop if Moore's Law *does* hold. The current "hey, we've got lots of cores and current parallelism primitives tend to suck a lot" problem will have a solution, but we still don't know what it will look like.
So some of how programming will change depends on what the field looks like at the time. Had Moore's Law stopped back before OO took off, I suspect OO would have taken longer to catch on -- the whole "your time is more valuable than computer cycles" argument, while still true, would have had less visceral appeal. I suspect that in general, whenever Moore's Law stops holding, you'll see a lot more focus than is warranted on the constant factors in front of runtimes, just because we'll have a sudden Depression-style paranoia that this is *it* and we don't get any more cycles *ever* (even though processors will continue to get faster, just not by as much or as quickly). I suspect it'll take a few years (maybe a decade?) to get past that mindset...
I think that massive parallelization is the answer, but seriously, do you really think most computer software is heavily reliant on ever-increasing computational power to work? I think we crossed the threshold of needing more, more, more for the majority of apps a while ago. Anyway, specialized co-processors are the thing you will be programming for if you stay on the heavy computational math side of things. The nVidias of the world have finally recognized that market, and it is being actively developed. The more interesting question to me is who is going to create the right mix of language/VM to make parallel programming a non-issue for the average developer, the way that memory management is thanks to Java. This *might* be a pipe dream on my part, but I have hope that it is possible. Because, my friend, your job is totally secure if everyone has to really start writing hand-tuned parallel code. Even good developers frequently suck at it.
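On the co-processor point, the style of code that ports to that hardware looks roughly like this; numpy here is only a CPU stand-in for the vendor GPU math libraries, not a claim about any particular product:

    # Sketch of co-processor-friendly code: one whole-array expression instead of
    # an element-by-element loop, so the library underneath can decide where it runs.
    import numpy as np

    def blend(a, b, alpha=0.5):
        return alpha * a + (1.0 - alpha) * b

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    print(blend(a, b)[:3])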
Fuck, man, I write hand-tuned pseudo-code for a model of parallel/distributed computation which is much simpler than anything real computers implement, and it's still a bitch and a half.
P.S. designing parallel algorithms is hard. Proving properties of them is fun.
My bet, on the positive side, is that video games will actually have to be fun again, instead of so damn shiny that you forget they kinda suck.
I miss the Atari 2600, and games worrying about fun before graphics.
40 years is manageable.
10 years significantly affects my life.