keeps me awake at night

Feb 21, 2008 00:24

Has anyone read anything interesting/insightful about what will happen to software after Moore's law ceases to hold? ( Read more... )


Comments 15

soong February 21 2008, 12:59:26 UTC
The limit of silicon chip technology as we know it is probably almost here. Two ways out of this that I've heard are based on going beyond the basically 2D nature of current chips. A) Deposit a whole new layer of silicon substrate and build a new chip on top of the first one; every layer you add multiplies the effective density. Or B) move to a wholly new process, maybe discrete transistors wired together by nanoassembled carbon nanotube wires.

No one can tell if this stuff will get here just in time or a little late, but one way or another the upward march of computing power will continue!



tyedie February 21 2008, 13:44:13 UTC
Massive parallelism. Even if faster chips can't be invented, chips at speed x will still drop in price over time, allowing more and more of them to be sold for the same price. The challenge is in programming that parallelism and in dealing with the depressing reality that inherently sequential processes may never be sped up.

That's just a summary of what I've heard, and I'd say I have 75% confidence in its reasonableness.
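
To put a number on the "inherently sequential" part: that's Amdahl's law. A back-of-the-envelope sketch in Haskell (illustrative figures of my own, not measurements from anywhere):

    -- Amdahl's law: overall speedup on n cores when a fraction p
    -- of the work can be parallelized.
    amdahl :: Double -> Double -> Double
    amdahl p n = 1 / ((1 - p) + p / n)

    main :: IO ()
    main = print (amdahl 0.9 16)
      -- Prints 6.4: even with 90% of the work parallel, 16 cores
      -- buy only a 6.4x speedup, and no core count ever beats the
      -- 1 / (1 - p) = 10x ceiling set by the sequential part.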


talldean February 21 2008, 14:17:58 UTC
Looking at how reluctant folks have been to develop for the PS3, I don't have much hope.

My bet, on the positive side, is that video games will actually have to be fun again, instead of so damn shiny that you forget they kinda suck.

I miss the Atari 2600, and games worrying about fun before graphics.


etiberius February 21 2008, 17:18:52 UTC
Ditto. This is the path we're taking in our OS development at my company. We assume CPU speeds are plateauing, but the number of cores per box, across our whole product line, is likely to keep increasing. The focus is on acknowledging this and introducing some new parallelization primitives at the OS level.
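
As a purely hypothetical sketch (not our actual API), a fork/join-style primitive like that might look like this in Haskell: ship work off to another core, get back a handle you block on. Built with ghc -threaded and run with +RTS -N, it really does spread across cores.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Exception (evaluate)

    -- Hypothetical fork/join primitive: run an action on another
    -- core and return an action that blocks until the result lands.
    spawn :: IO a -> IO (IO a)
    spawn action = do
      box <- newEmptyMVar
      _ <- forkIO (action >>= putMVar box)
      return (takeMVar box)

    main :: IO ()
    main = do
      joinA <- spawn (evaluate (sum [1 .. 10000000 :: Integer]))
      joinB <- spawn (evaluate (product [1 .. 20 :: Integer]))
      a <- joinA  -- block until the first worker finishes
      b <- joinB  -- then the second
      print (a, b)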

It certainly looks like the direction Intel is taking is to keep adding cores.


skamille February 22 2008, 00:20:26 UTC
It is THE thing coming out of academic computer architecture, and pretty much has been for the last several years. That and power consumption. I know that Google and Goldman are actively parallelizing their software to take advantage of the trend, and Red Hat is adding a lot of parallelization tools to its OS and compiler libraries.



catamorphism February 21 2008, 14:57:08 UTC
Either (a) pure functional programming or (b) functional logic programming is going to take over the world. Duh.


r_transpose_p February 21 2008, 23:25:51 UTC
Man, you guys need some advances in the next 10-20 years that'll convince everyone to switch, before everyone starts *really* caring about performance.

(an "advance" could include "something that shows regular people that functional programming is the route to performance")


catamorphism February 22 2008, 01:16:14 UTC
I guess you didn't get my allusion. The argument that gets advanced is "pure functional programming is the only way to go, because programming a multicore machine explicitly is impossible, and the only way compilers are going to be able to target one is if you use a language without implicit state."
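
A minimal sketch of what that buys you, assuming GHC's Control.Parallel.Strategies and a -threaded build: fib below has no implicit state, so the parMap annotation can't change the answer; it only tells the runtime it's free to farm the calls out across cores.

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Pure: no hidden state, so parallel evaluation is safe by
    -- construction. The strategy annotation affects only speed.
    fib :: Int -> Integer
    fib n | n < 2     = toInteger n
          | otherwise = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = print (sum (parMap rdeepseq fib [28 .. 34]))

Run with +RTS -N4 and the same source uses four cores. No locks, no threads anywhere in the program text.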


r_transpose_p February 22 2008, 03:40:32 UTC
Oh right, and the multicore thing will happen before the "performance improvements stop happening" thing, so we may end up in a functional paradigm before we start getting massive performance anxiety.



angelbob February 21 2008, 16:38:04 UTC
Part of the problem is that we can't even figure out how software will continue to develop if Moore's Law *does* hold. The current "hey, we've got lots of cores and our parallelism primitives tend to suck a lot" problem will have a solution, but we still don't know what it will look like.

So some of how programming will change depends on how it looks at the time. Had Moore's Law stopped back before OO took off, I suspect OO would have taken longer to catch on -- the whole "your time is more valuable than computer cycles" thing, while still true, would have had less visceral appeal.

I suspect that in general, whenever Moore's Law stops holding, you'll see a lot more focus, even more than is warranted, on the constant in front of runtimes, just because we'll have a sudden Depression-style paranoia about how this is *it*, and we don't get any more cycles *ever* (even though processors will continue to get faster, just not by as much or as quickly). I suspect it'll take a few years (maybe a decade?) to get past that mindset...


hmmm r_transpose_p February 21 2008, 23:23:56 UTC
So my fear is primarily economic ( ... )


Re: hmmm r_transpose_p February 21 2008, 23:28:03 UTC
Oh wait, the optimistic "number of years before processing power becomes proportional to processor mass" might be 40 years under that reasoning...

40 years is manageable.

10 years significantly affects my life.



skamille February 22 2008, 00:26:27 UTC
I think that massive parallelization is the answer, but seriously, do you really think most computer software relies on ever-increasing computational power to work? I think we crossed the "more, more, more" threshold for the majority of apps a while ago.
Anyway, specialized co-processors are what you'll be programming for if you stay on the heavy computational math side of things. The NVIDIAs of the world have finally recognized that market, and it's being actively developed.
The more interesting question to me is who is going to create the right mix of language/VM to make parallel programming a non-issue for the average developer, the way memory management is thanks to Java. This *might* be a pipe dream on my part, but I have hope that it's possible. Because, my friend, your job is totally secure if everyone really has to start writing hand-tuned parallel code. Even good developers frequently suck at it.
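
To show what I mean by "suck at it", here's a toy Haskell sketch (mine, purely illustrative): two threads bump a shared counter with an unsynchronized read-then-write. Compiled with ghc -threaded and run with +RTS -N2, the printed total usually falls short of 200000 because interleaved increments get lost. The one-line fix is atomicModifyIORef, but you have to know to reach for it.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Monad (replicateM_)
    import Data.IORef (newIORef, readIORef, writeIORef)

    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)
      done    <- newEmptyMVar
      let bump = replicateM_ 100000 $ do
            n <- readIORef counter      -- read ...
            writeIORef counter (n + 1)  -- ... then write: not atomic
      _ <- forkIO (bump >> putMVar done ())
      bump           -- second worker runs on the main thread
      takeMVar done  -- wait for the forked worker to finish
      readIORef counter >>= print  -- usually prints less than 200000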


r_transpose_p February 22 2008, 01:14:17 UTC
Fuck, man, I write hand-tuned pseudo-code for a model of parallel/distributed computation which is much simpler than anything real computers implement, and it's still a bitch and a half.

P.S. Designing parallel algorithms is hard. Proving properties of them is fun.


eub February 22 2008, 07:26:03 UTC
My Firefox could use a ton more computational power just to get GMail up to the speed of mh over a 19.2k modem. Sob.



