Four things you need to know about Cluster Programming

Data Science   |   Published June 23, 2017

In the world of Big Data, information has evolved from a merely passive construct into an active, dynamic abstraction. The language of this information is computer code, but before diving into cluster programming, there are several things you should first understand.

Amdahl’s law and the limits of parallel computation

Despite the power and promise of parallel computation, there are limits to what parallel computers can accomplish. In 1967, computer scientist Gene Amdahl presented a paper at that year’s AFIPS Spring Joint Computer Conference describing what would eventually become known as “Amdahl’s law.” The law bounds the maximum achievable speedup for a mixed problem with both parallel and serial components: for a program whose parallelizable fraction is p, the speedup on N processors is 1/((1 - p) + p/N), which approaches 1/(1 - p) as N grows. For a computational problem that is 95% parallelizable, with the remaining 5% unavoidably serial, the maximum speedup is therefore 1/(1 - 0.95) = 20, no matter how many processors are added.
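To make that ceiling concrete, here is a minimal Python sketch of Amdahl’s law; the function name is ours, chosen for illustration rather than taken from any library:

```python
# A minimal sketch of Amdahl's law: speedup as a function of the
# parallelizable fraction p and the number of processors N.

def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Speedup of a program whose parallel_fraction runs on n_processors."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# With 95% of the work parallelizable, extra processors help less and less:
for n in (2, 8, 64, 1024, 1_000_000):
    print(f"{n:>9} processors -> speedup {amdahl_speedup(0.95, n):5.2f}")
# Even a million processors stay below the 1/(1 - 0.95) = 20 ceiling.
```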

Differences between Parallel, Distributed, Cloud, Grid, and Cluster Computing

When talking about cluster programming, it is important to understand what cluster computing is and how it differs from other kinds of computing. While a distributed computing system makes use of parallel computation, parallel computing does not necessarily require a distributed system: the principles of parallel computing are at work in any standard desktop computer with a multicore processor, and much everyday software is written in languages with built-in support for concurrency.
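To illustrate that desktop-scale parallelism, the sketch below spreads a toy CPU-bound task across the cores of a single machine using Python’s standard multiprocessing module; slow_square is a hypothetical stand-in for real work:

```python
# Parallelism without a distributed system: one process per CPU core,
# all on a single desktop machine.
from multiprocessing import Pool

def slow_square(x: int) -> int:
    """A stand-in for a CPU-bound computation."""
    return x * x

if __name__ == "__main__":
    with Pool() as pool:          # defaults to one worker per CPU core
        results = pool.map(slow_square, range(10))
    print(results)                # [0, 1, 4, 9, ..., 81]
```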
Distributed computing, by contrast, typically concerns the sharing of resources among a large number of computers connected over a network. The differences between cloud, grid, and cluster systems come down to where those resources go and how they work together.
In cluster computing, the resources of a distributed network of computers are combined to serve a single user or task. In contrast, cloud computing can provide many services to many different users or tasks. Depending on the source, grid computing is either structurally identical to cloud computing, differing only at the application layer, or is distinguished by its decentralized sharing of resources among the nodes in the group, whereas cloud computing tends toward more centralized resource management.

MapReduce

In recent years, the MapReduce framework has been a major interest of distributed and cluster computing programmers, largely because it handles node monitoring and failure recovery automatically. The name “MapReduce” refers to the combined use of the Map() and Reduce() programming functions, as described in a 2004 paper presented by Google researchers at the Sixth Symposium on Operating Systems Design and Implementation. Since then, the model has been integrated into the Apache Hadoop and Apache Spark cluster programming frameworks.
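The canonical example from that paper is word counting. Below is a single-machine Python sketch of the idea; the function names and the in-memory “shuffle” step are simplifications of the phases a real framework distributes across cluster nodes:

```python
# A toy, single-machine MapReduce: map emits (key, value) pairs, a shuffle
# groups them by key, and reduce folds each group into a final value.
from collections import defaultdict

def map_phase(document: str):
    for word in document.split():
        yield word, 1                     # emit (word, 1) per occurrence

def reduce_phase(word: str, counts: list):
    return word, sum(counts)              # fold all the values for one key

documents = ["the quick brown fox", "the lazy dog", "the fox"]

groups = defaultdict(list)                # shuffle: group pairs by key
for doc in documents:
    for word, count in map_phase(doc):
        groups[word].append(count)

print(dict(reduce_phase(w, c) for w, c in groups.items()))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```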
Of the two, Hadoop is the older framework, and its main feature is its distributed data storage management. Spark, which became a top-level Apache project in 2014, targets distributed data processing instead; this means, for example, that you can install Spark on a cluster for real-time data processing without using any distributed storage at all.
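As a sketch of that storage-free mode (assuming the pyspark package is installed, for example via pip install pyspark), the snippet below runs the same kind of word count entirely from an in-memory list, with no distributed file system such as HDFS involved:

```python
# Word count on Spark from an in-memory collection; no HDFS required.
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-sketch")    # use all local cores

lines = sc.parallelize(["the quick brown fox", "the lazy dog", "the fox"])
counts = (lines.flatMap(lambda line: line.split())   # map: split into words
               .map(lambda word: (word, 1))          # emit (word, 1) pairs
               .reduceByKey(lambda a, b: a + b))     # reduce: sum per word

print(counts.collect())
sc.stop()
```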

Famous distributed computing projects

The first ever general-purpose distributed computing project was Distributed.net, which began on January 28, 1997. The project mainly focuses on solving computationally intensive mathematical problems in the field of cryptography. In its 20-year run, it has successfully cracked RSA Labs’ 56-bit and 64-bit RC5 secret-key challenges and is now working on the 72-bit variant.
Arguably, the most famous distributed computing project is SETI@home, which has spent nearly two decades searching the radio spectrum of interstellar space for signals from extraterrestrial civilizations. So far it has found none, but a similar project, Folding@home, has contributed to medical computing by investigating the causes of protein misfolding.
Although the story of cluster computing continues, you will have to wait before you hear the rest of it.