A look at current methods of tackling large-scale computing tasks will turn up plenty of solutions touting enough hardware to fill a few floors of your facility, to say nothing of the cost of keeping it all running and working well together.
None of that is palatable to think about, but it matters to your business and there’s no easy way out. Thankfully, some interesting research and development is going into stream processing that could considerably lower the cost of entry for some fields.
High-performance computing tasks are usually handled by an army of drone machines sporting commodity processors from your favorite CPU manufacturers, with concessions made for highly specialized tasks that call for the odd custom chip. It’s all tied together with whatever interconnect technology your vendor happens to favor.
What this approach lacks in efficiency it more than makes up for with sheer brute force. But even this has its practical limits unless you’re in the habit of calling up IBM for a few dozen Blue Gene racks.
It’s not just for World of Warcraft anymore
The business world isn’t keen on looking to “toys” to solve major computing problems. Yet the humble graphics chip, added to personal computers so you can enjoy a rich user interface, has been undergoing some important changes over the years. As PC gamers can attest, graphics chips have become much more powerful, though few players could conceive of the changes under the hood that brought about these new visual feasts.
Graphics API and game developers had been clamoring for specific features from video chipsets to give them more flexibility when programming and designing their games and the odd visualization application. These much-sought-after programmable pipelines have let them push video game graphics to new levels of realism and have given other research fields a new computing device with some rather impressive capabilities.
It’s worth noting the core makeup of a modern, high-end graphics card. It’s a specialized device whose GPU (Graphics Processing Unit) features an integrated memory controller with a wide data bus and a fair amount of very high-speed memory. It all sounds very technical, but suffice it to say, the modern graphics processor currently enjoys 8 to 12 times the bandwidth a modern CPU can muster from its own memory subsystem. The bottom line is that it will gorge on the information you feed it and ask for seconds.
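A rough, back-of-the-envelope calculation shows where that gap comes from (the figures below are illustrative examples, not from the article): a high-end card with a 512-bit memory bus and GDDR3 running at an effective 2,200 MT/s moves about 64 bytes × 2.2 GT/s ≈ 141 GB/s, while a desktop CPU with dual-channel DDR2-800 tops out at 16 bytes × 0.8 GT/s = 12.8 GB/s. That works out to roughly an 11x advantage, squarely within the 8-to-12x range.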
In general, the modern GPU core features quite a few separate shader pipelines, which allows a game’s scene to be broken up into manageable chunks for each unit to work on. As programmability has evolved and more shader units have been added, video performance has improved greatly, allowing for ever more complex graphics.
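To make the “manageable chunks” idea concrete, here is a minimal sketch of a data-parallel kernel written for Nvidia’s CUDA toolkit, one of the vendor-specific environments discussed below. Treat it as an illustration rather than a canonical recipe: the kernel and launch syntax are standard CUDA, but cudaMallocManaged assumes a much newer toolkit (CUDA 6 or later) than the hardware of this era. Each GPU thread handles exactly one array element, much as each shader unit takes a small slice of a frame:

```cpp
// saxpy.cu -- one element per thread, the basic GPGPU work split
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // roughly one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // memory visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover all n
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();  // wait for the GPU before reading results

    printf("y[0] = %f (expect 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same million-element job fans out across thousands of concurrent threads, which is exactly the shape of problem those wide memory buses were built to feed.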
While the GPU is a thoroughbred in its native environment, it had its flaws for general-purpose work, namely the lack of some much-needed features. Double-precision floating-point support is a recent addition to AMD’s graphics chips; while not a major boon for the latest video games, it is a very important addition if you want any sort of accuracy in your results.
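A quick sketch shows why double precision matters for numerical work: summing the same small value millions of times drifts visibly in single precision while staying essentially exact in double. This is plain host-side C++ (compilable with nvcc or any C++ compiler), and the numbers are purely illustrative:

```cpp
// precision.cu -- accumulated rounding error in float vs. double
#include <cstdio>

int main() {
    const long n = 10L * 1000 * 1000;  // ten million additions
    float  sum_f = 0.0f;
    double sum_d = 0.0;
    for (long i = 0; i < n; ++i) {
        sum_f += 0.1f;  // 24-bit mantissa: rounding error compounds quickly
        sum_d += 0.1;   // 53-bit mantissa: drift stays negligible
    }
    printf("float : %.1f\n", sum_f);  // noticeably far from 1000000.0
    printf("double: %.1f\n", sum_d);  // prints 1000000.0
    return 0;
}
```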
Constant evolution is a big reason why the GPGPU (General-Purpose computation on GPU) may see much more interest. While a CPU platform may stick around for five or more years and see gradual improvements, it’s not uncommon for GPUs to undergo a major overhaul every two years, fast-tracking the latest and greatest technical advancements to get the upper hand on the competition.
AMD, Nvidia and Intel
All of this computational power isn’t lost on two of the largest graphics chip designers: AMD and Nvidia have begun marketing their own lines of purpose-built stream processors dedicated to data crunching.
The GPGPU’s performance isn’t lost on Intel either, so it has been hard at work on Larrabee, a GPU that takes a completely different approach to its graphics and processing elements and that will, conveniently, also be targeted at the GPGPU market. Knowing that Intel doesn’t enter an arena half-heartedly, we’re sure to see a blitz of solutions at various price points with a top-notch development environment.
Which brings us to the first major flaw in this burgeoning field: the lack of a standard.
Each hardware platform is sure to have its own specialized software to ensure code runs as efficiently as possible, which makes portability a no-go. The question is whether this will remain a highly specialized field or grow into a broader market for this form of processing power, and whether the major players will make the concessions needed to move it forward.
Pushing pixels and boundaries
For now, it all smells of the “fresh out of R&D” and “not yet ready for prime time” routine, as is true of most new technologies. However, a few industries have taken up the cause and have benefited from the horsepower in some remarkable ways. Plus, academics and the military are having a field day with their own pet projects.
You won’t find a ready-made solution you can deploy immediately on the market yet, and there’s still serious research to be done. Plenty of major hardware revisions are also required before this becomes a ubiquitous computing platform, but the major players are fully behind the approach.
Nonetheless, there is plenty of interest in making stream processing work, and early results show significant gains. Expect the developments to come fast and furious.
This article was first published on EnterpriseITPlanet.com.