A look at current methods of tackling large-scale computing tasks will undoubtedly lead you to plenty of solutions touting enough hardware to take up a few floors of your facility. Not to mention the associated costs of keeping it all running and getting it working well together.
It’s not a palatable prospect, but it’s important to your business and there’s no easy way out. Thankfully, some interesting research and development is being put into stream processing that could considerably lower the cost of entry for some fields.
High-performance computing tasks are usually handled by an army of drone machines sporting commodity processors from your favorite CPU manufacturers, with concessions made for highly specialized tasks that require the odd custom chip for improved performance. It’s all tied together with whatever interconnect technology your vendor happens to favor.
What this approach lacks in efficiency it more than makes up for with sheer brute force. But even this has its practical limits unless you’re in the habit of calling up IBM for a few dozen Blue Gene racks.
It’s not just for World of Warcraft anymore
The business world isn’t keen on looking at “toys” to solve major computing problems. Yet the humble graphics chip, added to personal computers so that you can enjoy a rich user interface, has been undergoing some important changes over the years. As PC gamers can attest, these chips have become much more powerful, though most gamers couldn’t conceive of the changes under the hood that brought about their new visual feasts.
Graphics API and game developers had been clamoring for specific features from video chipsets to give them more flexibility when programming and designing their games and the odd visualization application. These much-sought-after programmable pipelines have allowed them to push video game graphics to new levels of realism and have given other research fields a new computing device with some rather impressive capabilities.
It’s worth noting the core makeup of a modern, high-end graphics card. It’s a specialized device whose GPU (Graphics Processing Unit) features an integrated memory controller with a wide data bus and a fair amount of very high-speed memory. It all sounds very technical, but suffice it to say, the modern graphics processor currently enjoys 8 to 12 times the bandwidth a modern CPU can muster from its own memory subsystem. The bottom line is that it will gorge on the information you feed it and ask for seconds.
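As a rough worked example of where that gap comes from (the bus widths and transfer rates below are illustrative assumptions, not the specs of any particular product): memory bandwidth is simply the bus width in bytes times the effective transfer rate.

```python
# Back-of-the-envelope memory bandwidth: bytes per transfer
# times effective transfers per second. All figures below are
# illustrative assumptions, not specs for any real product.

def bandwidth_gb_per_s(bus_bits, effective_gt_per_s):
    # bits/8 gives bytes per transfer; bytes * GT/s = GB/s
    return (bus_bits / 8) * effective_gt_per_s

gpu = bandwidth_gb_per_s(512, 1.8)  # wide 512-bit bus, fast graphics memory
cpu = bandwidth_gb_per_s(128, 0.8)  # dual-channel 64-bit system memory

print(f"GPU: {gpu:.1f} GB/s, CPU: {cpu:.1f} GB/s ({gpu / cpu:.0f}x)")
# → GPU: 115.2 GB/s, CPU: 12.8 GB/s (9x)
```

The 9x ratio these assumed figures produce falls squarely in the 8-to-12-times range described above; the wide bus does most of the work.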
In general, the modern GPU core features quite a few separate shader pipelines, which allows a game’s scene to be broken up into manageable chunks for each unit to work on. As programmability evolved and more shader units were added, video performance improved greatly and allowed for increased graphics complexity.
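The model those shader pipelines embody is data parallelism: one small program (a kernel) applied independently to every element of a stream. A minimal sketch in Python, with the kernel and pixel data invented for illustration (on real hardware, each shader unit would process its chunk concurrently):

```python
# Stream processing in miniature: one kernel function applied
# independently to every element of an input stream.

def kernel(pixel):
    # A toy "shader": brighten a pixel value, clamped to 8 bits.
    return min(pixel + 64, 255)

def run_stream(stream, kernel, units=4):
    # Split the stream into chunks, one per "shader unit".
    chunk = (len(stream) + units - 1) // units
    slices = [stream[i:i + chunk] for i in range(0, len(stream), chunk)]
    # Each unit applies the same kernel to its own chunk; because the
    # elements are independent, the chunks could run in parallel.
    return [kernel(x) for s in slices for x in s]

print(run_stream([0, 100, 200, 250], kernel))  # → [64, 164, 255, 255]
```

Because no element depends on any other, adding more units scales throughput almost linearly, which is exactly why piling on shader pipelines kept paying off.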
While it was bred as a thoroughbred for graphics work, the GPU has its flaws for general-purpose computing, namely the lack of some much-needed features. Double-precision floating point support is a recent addition to AMD’s graphics chips; while not a major boon for the latest video games, it is a very important addition if you want any sort of accuracy in your results.
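To see why double precision matters for accuracy, here is a small illustration. Python floats are natively double precision, so the sketch below rounds one running total through 32-bit floats via the struct module to mimic single-precision hardware:

```python
import struct

def to_f32(x):
    # Round a Python float (double precision) through IEEE 754
    # single precision, mimicking a 32-bit register.
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 a million times. The exact answer is 100000.
total32, total64 = 0.0, 0.0
tenth32 = to_f32(0.1)
for _ in range(1_000_000):
    total32 = to_f32(total32 + tenth32)
    total64 += 0.1

# The single-precision total drifts visibly away from 100000,
# while the double-precision total stays accurate to a tiny fraction.
print(total32, total64)
```

A long-running simulation does far more than a million accumulations, so single-precision rounding error of this kind can swamp the answer entirely.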
Constant evolution is a big reason why the GPGPU (General-Purpose computation on GPU) may see much more interest. While a CPU platform may stick around for five or more years and see gradual improvements, it’s not uncommon for GPUs to undergo a major overhaul every two years, fast-tracking the latest and greatest technical advancements to get the upper hand on the competition.
AMD, Nvidia and Intel
All of this computational power isn’t lost on the two largest graphics chip designers. AMD and Nvidia have both been courting the GPGPU market with development platforms of their own, AMD with its Stream SDK and Nvidia with CUDA.
The GPGPU’s performance isn’t lost on Intel either, so it has been hard at work on its Larrabee GPU, which takes a completely different approach to its graphics and processing elements and will also, conveniently, be targeted toward the GPGPU market. Knowing that Intel doesn’t go into an arena half-heartedly, we’re all sure to see a blitz of solutions at various price points with a top-notch development environment.
Which brings us to the first major flaw in this burgeoning field: the lack of a standard.
Each hardware platform is sure to have its own specialized software to ensure code runs as efficiently as possible, which makes portability a non-starter. The question is whether this will remain a highly specialized field or grow into a broader market for this form of processing power, and whether the major players will make the concessions needed to move it forward.
Pushing pixels and boundaries
For now, it all smells of the “fresh out of R&D” and “not yet ready for prime time” routine, as is true of most new technologies. However, a few industries have taken up the cause and have benefited from the horsepower in some remarkable ways. Plus, academics and the military are having a field day with their own pet projects.
You won’t find a ready-made solution you can deploy immediately on the market yet, and there’s still serious research to be done. Plenty of major hardware revisions are also required before it becomes a ubiquitous computing platform, but the major players are fully backing this approach.
Nonetheless, there is plenty of interest in making stream processing work with early results showing significant gains. So expect the developments to come fast and furious.
This article was first published on EnterpriseITPlanet.com.