
IBM z13 announced; why are people surprised?

Name: Anonymous 2015-01-18 0:12

http://techcrunch.com/2015/01/13/the-new-ibm-z13-is-not-your-fathers-mainframe/

(pastebin: http://pastebin.com/TbuWu6LW )
(images: http://imgur.com/a/z4Vbs )

Yes, technology-wise it's all cool, yeah yeah, Big Blue does it again, big data, big iron, et cetera, et cetera.

But what gets me the most is the media response, which seems to be "the mainframe is back." Where did it go? Nowhere; if you look at IBM's and other manufacturers' reports, they still have as many customers as ever, so why is technology news trying to portray mainframe computers as on their way out?

At what point did we, or will we ever, not need to make a massive number of calculations, do calculations of an extremely high complexity, or both, at a maximum efficiency that only specialized hardware can provide?

Is this like that "desktops are dead" FUD that shitheads who don't actually do work like to spread?

When you look at the technical reports about the new z13, it is amazing what it can do, so I'm not bothered by people being surprised at its capabilities (I am too), but the large number of people surprised that mainframes even exist is troubling. It feels indicative of a shift towards universal standards, and if that means generic ARM and x86 devices for everything, there is very little we're going to be doing as well as we could.

Why did the trend go from trying everything and seeing what worked best for a specific task, which led to lots of technology that did lots of things REALLY well, to arbitrarily choosing one set of technologies, using it for everything, and then just running it into the ground? Did modern hardware capability spoil us so that we don't have to innovate?

Name: Cudder !MhMRSATORI 2015-01-18 15:11

>>12
No mention of the fact that memory bandwidth is now the bottleneck, that instructions have to be read from memory somehow anyway, and not even a single occurrence of the word "cache"?
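
(Not from the article, just a quick C sketch of that point, with arbitrary illustrative sizes: sum the same ~256 MB array twice, once sequentially and once with a 64-byte stride. Both loops execute the same number of add instructions; on typical hardware the strided one is several times slower because it wastes almost the whole cache line it fetched, so the memory system, not the instruction stream, sets the pace.)

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024)  /* 64M ints (~256 MB), far larger than any cache */
#define STRIDE 16             /* 16 ints * 4 bytes = one 64-byte cache line per access */

/* Sum all N elements, visiting them with the given step.  Both calls do the
   same number of additions; only the memory access pattern differs. */
static long long walk(const int *a, size_t step)
{
    long long sum = 0;
    for (size_t s = 0; s < step; s++)
        for (size_t i = s; i < N; i += step)
            sum += a[i];
    return sum;
}

int main(void)
{
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++)
        a[i] = (int)(i & 0xff);

    clock_t t0 = clock();
    long long s1 = walk(a, 1);       /* streams through each cache line once */
    clock_t t1 = clock();
    long long s2 = walk(a, STRIDE);  /* uses 4 of every 64 bytes fetched, 16 passes */
    clock_t t2 = clock();

    printf("sequential: %.2fs   strided: %.2fs   (sums %lld %lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, s1, s2);
    free(a);
    return 0;
}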

http://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/bibliography/index.html

The most recent reference from that article is almost 15 years old, from the period before the fall of NetBurst, when everyone was still chasing ridiculously high clock frequencies and the "RISC is the future" movement was still running strong. I wish Intel hadn't jumped on the RISC bandwagon; then we might've had Nehalem-level performance 5 years earlier.

>>13
More like 30 years now...

>>14
ARM has multicycle instructions, complex addressing modes, and predicated execution with uop-based decoupled front- and back-ends. It is hardly a pure RISC. MIPS is pure RISC - and it's a horrible performer:

http://www.extremetech.com/extreme/188396-the-final-isa-showdown-is-arm-x86-or-mips-intrinsically-more-power-efficient/2

"Where's that promise of simple CPUs being faster and more efficient while also cheaper to design? I'm still waiting, Hennessy..."
