HPCC (High Performance Computing Cluster) is a cluster computing platform used to solve Big Data problems. Its unique architecture and its simple yet powerful data programming language, ECL, make it a compelling solution for data-intensive computing needs.
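To give a flavor of ECL's declarative style, here is a minimal sketch (the record layout, dataset, and field names are illustrative, not from an actual HPCC deployment):

```ecl
// Hypothetical example: define a record layout, build an inline dataset,
// filter it, and output a sorted result.
PersonRec := RECORD
    STRING25  Name;
    UNSIGNED1 Age;
END;

People := DATASET([{'Ann', 34}, {'Bob', 27}, {'Cal', 12}], PersonRec);

// Record-set filter: keep only adults
Adults := People(Age >= 18);

OUTPUT(SORT(Adults, Name));
```

Rather than telling the cluster how to iterate, ECL declares what the result should be, and the platform parallelizes the work across the cluster's nodes.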
As a result of the information explosion, many organizations now need to process and analyze massive volumes of data. These data-intensive computing requirements can be addressed by scalable systems built on clusters of commodity servers, coupled with system software that provides a distributed file storage system, a job execution environment, online query capability, parallel application processing, and parallel programming development tools. The HPCC platform provides all of these capabilities in an integrated, easy-to-implement, open-source, high-performance computing environment.
The following sections make a strong case for selecting the HPCC Systems platform as your choice for solving Big Data problems:
Learn the basics about the platform architecture and how it all comes together.
Explore the technology and infrastructure behind the powerful HPCC platform.
Discover how the benefits of the HPCC platform can help address Big Data needs.
Recognize the differences between HPCC and Hadoop, and the areas in which HPCC outperforms it.
Delve into specific case studies of actual HPCC Systems implementations.
Collaborate with others in our forums, and review whitepapers and other training materials.
Download the platform, tools, and documentation to kick off your journey into HPCC!