Numerous tech vendors have recently announced big data initiatives, and now Intel is joining them: the chip maker plans to release its own distribution of the open source Apache Hadoop software.
The Wall Street Journal’s Don Clark reported, “Intel has long been heavily involved in software, though its purpose typically is mainly to help sell computer chips. Another example is emerging Tuesday. The Silicon Valley giant is announcing plans to offer its own version of Hadoop, an increasingly popular program that helps break up applications to be run on large clusters of commodity server hardware.”
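The "break up applications to run on clusters" model Clark describes is MapReduce, the programming pattern at the heart of Hadoop: input data is split into chunks, a map function processes each chunk independently (in Hadoop, in parallel across commodity servers), and a reduce step merges the partial results. The following is a minimal single-machine sketch of that pattern in plain Python; the function names are illustrative and not part of any Hadoop API.

```python
# Sketch of the MapReduce pattern Hadoop parallelizes across a cluster.
# Here the "map" phase runs sequentially in one process; Hadoop's value
# is running map_chunk on thousands of splits across many machines.
from collections import Counter
from functools import reduce

def map_chunk(chunk):
    """Map phase: count words in one input split."""
    return Counter(chunk.split())

def reduce_counts(a, b):
    """Reduce phase: merge two partial word counts."""
    return a + b

# Two "splits" standing in for blocks of a large file in HDFS.
chunks = ["big data big clusters", "data clusters data"]

partials = [map_chunk(c) for c in chunks]   # map: independently parallelizable
totals = reduce(reduce_counts, partials)    # reduce: merge partial results

print(totals["data"])  # 3
```

Because each `map_chunk` call touches only its own split, the work scales out by adding machines rather than scaling up one server, which is what makes clusters of cheap commodity hardware attractive.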
ZDNet’s Rachel King noted, “Intel is framing its deployment of the open source software framework as a ground-up approach by baking Hadoop directly into the silicon level.” She added, “Set to roll out worldwide, the Intel-Hadoop deployment will be delivered through an annual subscription with technical support through solution vendors and service providers.”
According to the Intel press release, “The optimizations made for the networking and IO technologies in the Intel Xeon processor platform also enable new levels of analytic performance. Analyzing one terabyte of data, which would previously take more than 4 hours to fully process, can now be done in 7 minutes thanks to the data-crunching combination of Intel’s hardware and the Intel Distribution. Considering Intel estimates that the world generates 1 petabyte (1,000 terabytes) of data every 11 seconds or the equivalent of 13 years of HD video, the power of Intel technology opens up the world to even greater possibilities.”
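Taking the press release's figures at face value, the claimed improvement works out to a roughly 34x speedup (a lower bound, since "more than 4 hours" is taken here as exactly 4 hours):

```python
# Back-of-the-envelope check of the press release's 1 TB claim.
baseline_min = 4 * 60   # "more than 4 hours", taken as 240 minutes
optimized_min = 7       # 7 minutes with the Intel Distribution
speedup = baseline_min / optimized_min

print(round(speedup, 1))  # 34.3
```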
Intel also announced numerous partners for its Hadoop program, including supercomputing giant Cray, which announced that “it will introduce a new solution combining the Intel Distribution for Apache Hadoop software (Intel Distribution) with the Cray Xtreme line of supercomputers. The new offering will add to Cray’s portfolio of ‘Big Data’ solutions and give customers the ability to leverage the fusion of supercomputing and Big Data.”