Microsoft’s Biggest Problem

Microsoft’s biggest problem is not its antitrust case. Its chief competitive threat doesn’t come from Apple, Nintendo, or America Online.

Its biggest problem is ignoring the rules that govern the evolution of technology — and business.

Microsoft styled itself and its products in the image of most early computers, using what came to be known as “Von Neumann architecture.”

Named for the Hungarian-American mathematician John von Neumann, this architecture runs one program through one processor to deliver one result at a time. It was supplanted in the 1980s by massively parallel machines such as the nCUBE, which consisted of simpler computers dividing up the work. As those machines reached their limits around 1990, distributed systems, able to outperform even the fastest massively parallel machines, began to evolve.
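
The distinction is easy to see in miniature. Here is a minimal sketch, in Python rather than anything nCUBE-specific, and with an invented prime-counting job and worker count chosen purely for illustration: the same work done the von Neumann way (one processor, one result at a time) and the parallel way (simpler workers dividing it up).

    # One CPU-bound job, done serially and then divided among workers.
    from multiprocessing import Pool

    def count_primes(bounds):
        """Count primes in [lo, hi): deliberately naive, CPU-bound work."""
        lo, hi = bounds
        count = 0
        for n in range(max(lo, 2), hi):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        N, workers = 200_000, 4

        # Von Neumann style: one program, one processor, one result.
        serial = count_primes((0, N))

        # Parallel style: split the range among workers, then combine.
        step = N // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        with Pool(workers) as pool:
            parallel = sum(pool.map(count_primes, chunks))

        assert serial == parallel  # same answer, work divided four ways

On a multi-core machine the divided version finishes in a fraction of the time, and the same answer comes out either way. Stretch those workers across a network instead of a backplane and you have a distributed system.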

That evolution reaches the computer in front of you through distributed-computing projects such as Intel’s Philanthropic Peer-to-Peer Program, in which your PC joins a network of other computers to, quite literally, help find a cure for cancer.
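
The mechanics of such projects are simple enough to sketch. Nothing below is Intel’s actual protocol; the coordinator address, the work-unit format, and the helper functions are all hypothetical. The shape of the loop, fetch a chunk, compute, report back, is the whole idea.

    # Hypothetical distributed-computing client: fetch, compute, report.
    # The coordinator URL and work-unit shape are invented for illustration.
    import json
    import urllib.request

    COORDINATOR = "http://example.org/api"  # hypothetical project server

    def fetch_work_unit():
        """Ask the coordinator for one chunk of the larger problem."""
        with urllib.request.urlopen(f"{COORDINATOR}/work") as resp:
            return json.load(resp)  # e.g. {"id": 42, "lo": 0, "hi": 50000}

    def compute(unit):
        """Stand-in for the real science: any CPU-bound function of the chunk."""
        return sum(range(unit["lo"], unit["hi"]))

    def report_result(unit_id, result):
        """Send the finished chunk back to be merged with everyone else's."""
        payload = json.dumps({"id": unit_id, "result": result}).encode()
        req = urllib.request.Request(
            f"{COORDINATOR}/result", data=payload,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        unit = fetch_work_unit()
        report_result(unit["id"], compute(unit))

Your PC sits idle most of the day; a loop like this turns that idle time into one more worker in the network.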

The Internet evolved in much the same way.

In the mid-1990s, most U.S. traffic went through one of only four public network access points (NAPs). These became bottlenecks, so large Internet service providers (ISPs) created private NAPs to get around them.

The same pattern serves you today through peer-to-peer systems: not just Gnutella, but a whole industry in which Sun is only one participant.

As computing and the Internet evolved from centralized systems into networked ones, so did businesses. IBM learned this in the ’60s with System/360: the more people it threw at the task of speeding the machine to market, the more the project slowed down. Microsoft’s innovation of tackling specific problems with small, dedicated teams working 20-hour days was only a short-term fix.

The trend reached your company with flattened organization charts and the use of technology to push decisions to the shop floor.

The same holds in software: a single center means a single point of failure, and a single source is a bottleneck. This is why IBM now puts all its server eggs in the Linux basket. When a bug or virus hits Linux, IBM isn’t the only outfit on the case.

Anyone in the vast network of Linux users might solve the problem, because everyone has the source code. When Microsoft software hits a snag, everyone has to wait for Microsoft to fix it. Microsoft alone controls the code.

That’s why Gartner Group told clients earlier this year to dump Microsoft’s Internet server, IIS. It’s inherently insecure.

Microsoft’s “solution” is to stop communication about security problems and force all discussions to funnel through a few “trusted” sources. A communications bottleneck won’t stop a network of threats.

There’s a lot your company and its Internet operations can learn from this. When you let your people use their own networks of contacts to solve problems, problems get solved faster. When you divide work into components, it gets done faster.

Centralized organization charts don’t work as well as networked ones.

Any system with a single point of failure will, in time, fail. Have a back-up plan and don’t let yourself become dependent — on Microsoft or anyone else.
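
In code, a back-up plan is just a list with more than one entry. Here is a minimal failover sketch; the provider URLs and the fetch_with_failover helper are hypothetical placeholders, and the point is only that no single name appears without an alternative.

    # Minimal failover: never depend on a single provider.
    # The provider URLs below are hypothetical placeholders.
    import urllib.request

    PROVIDERS = [  # hypothetical mirrors, in order of preference
        "http://primary.example.org/data",
        "http://backup-1.example.org/data",
        "http://backup-2.example.org/data",
    ]

    def fetch_with_failover(urls, timeout=5):
        """Try each provider in turn; one point of failure becomes three."""
        last_error = None
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:  # refused, timed out, DNS failure...
                last_error = err    # note it, move on to the next provider
        raise RuntimeError(f"all providers failed; last error: {last_error}")

The same discipline applies to vendors as to URLs: the moment there is only one name on the list, you have recreated the single point of failure.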

  • This column was first published on ClickZ, an internet.com site.
