
On August 2, 2018, HPCC Systems hosted the latest edition of The Download: Tech Talks. This series of workshops is designed for the community, by the community, with the goal of sharing knowledge, sparking innovation, and further building and linking relationships within our HPCC Systems community. In this very special edition of Tech Talks, we feature some of our 2018 summer interns and the exciting work they are doing.

Watch Tech Talk  

Links to resources mentioned in Tech Talk 16:

Presentation

Community Forums

Agricultural Robot Platform Tests

HPCC Systems ROC Packages

MPI Project

MPI Project Issue Tracker

MPI Project Source Repository


Episode Guest Speakers and Subjects

Shah Muhammad Hamdi, PhD student studying Computer Science (Data Mining) at Georgia State University - Dimensionality Reduction and Feature Selection in ECL-ML

Shah Muhammad Hamdi is a third-year Ph.D. student in the Department of Computer Science at Georgia State University. He works in the Data Mining Lab (DMLab) under the supervision of Dr. Rafal Angryk. His research interests are machine learning, data mining, and deep learning, more specifically, finding interesting patterns in real-life graphs and time series data. His research finds applications in the fields of solar weather analysis and neurological disease prediction. Before joining the DMLab for his Ph.D., he worked for one year as a Lecturer in Computer Science at Northern University Bangladesh, Dhaka, Bangladesh. He received his Bachelor's degree in Computer Science in 2014 from Rajshahi University of Engineering and Technology (RUET), Rajshahi, Bangladesh.

Robert Kennedy, PhD student in Computer Science at Florida Atlantic University - Parallel Distributed Deep Learning on HPCC Systems

Robert Kennedy is a first year Ph.D. student in CS at Florida Atlantic University with research interests in Deep Learning and parallel and distributed computing. His current research is in improving distributed deep learning by implementing and optimizing distributed algorithms.

Aramis Tanelus, high school student studying at American Heritage School of Boca/Delray, Florida - Developing HPCC Systems Data Ingestion APIs for Common Robotic Sensors

Aramis Tanelus is a programmer and senior at American Heritage High School, where he is the lead programmer for the Advanced Robotics Team. He works with the Robot Operating System (ROS), collecting data from robots developed by the team and turning it into actionable output to help with robotic tasks. He is currently an intern in Boca Raton working on a project to develop software interfaces between robotic sensors and the HPCC Systems platform.

Saminda Wijeratne, Masters student studying Computational Science and Engineering at Georgia Institute of Technology, Atlanta - MPI Proof of Concept

Saminda Wijeratne is a Masters student in the Department of Computational Science and Engineering at Georgia Institute of Technology. His areas of research are high performance computing on distributed clusters and AI topics such as machine learning and neural networks. His adviser is Dr. Srinivas Aluru, professor and co-Executive Director of the Georgia Tech IRI in Data Engineering and Science. Saminda spent three years in industry as a senior software engineer at WSO2, an open source technology provider that offers an enterprise platform. He obtained his Bachelor of Computer Science from the University of Moratuwa, Sri Lanka.

Key Discussion Topics:

1:18 – Jessica Lorti provides community updates:

Reminder: 2018 HPCC Systems Community Day, Atlanta

• Poster abstracts are due September 7

• Sponsor packages still available

• Workshop & Poster Competition on October 8

• Main event on October 9

• Registration is open to our external Community!

• Visit hpccsystems.com/hpccsummit2018


5:07 - Shah Muhammad Hamdi, PhD student studying Computer Science (Data Mining) at Georgia State University - Dimensionality Reduction and Feature Selection in ECL-ML

Dimensionality reduction and feature selection are important tools for any machine learning library: they help compress and visualize high-dimensional data and improve the performance of supervised and unsupervised learning algorithms. In this presentation, Shah discusses the parallel implementation of Principal Component Analysis (PCA) using the Parallel Block Basic Linear Algebra Subsystem (PBblas) library. Additionally, Shah discusses the ECL implementations of some feature selection algorithms for the HPCC Systems platform.
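For readers unfamiliar with the classical algorithm Shah parallelized, here is a single-node NumPy sketch of PCA via eigen decomposition of the covariance matrix. The PBblas version distributes these same matrix operations across the cluster; this sketch is illustrative only and the `pca` helper is not part of any HPCC Systems API.

```python
import numpy as np

def pca(X, k):
    """Project X (n samples x d features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = (Xc.T @ Xc) / (X.shape[0] - 1)     # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: for symmetric matrices
    order = np.argsort(eigvals)[::-1]        # sort components by variance
    components = eigvecs[:, order[:k]]
    return Xc @ components                   # reduced representation

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

The first output column captures at least as much variance as the second, since components are sorted by eigenvalue.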

22:22 - Q&A

Q: Can you tell us about the helper functions you coded for use in PBblas?

A: I customized the ECL-ML singular value decomposition and QR decomposition for use with PBblas. Additionally, I developed several utility functions on PBblas that can be used in any machine learning project, such as functions for computing the L2-norm of a vector, converting to and from the normal matrix format, building an identity matrix of a given size, building vectors of ones, zeroes, or any constant when the length is given, computing the dot product of two vectors, and so on.

Q: Did you use the classical PCA approach, which is the eigen decomposition of the covariance matrix, modified with the PBblas library, or have you developed some other approach to PCA calculation from the literature?

A: We have implemented both approaches. First, we took the classical PCA implemented in ECL-ML, which is the singular value decomposition, or eigen decomposition, of the covariance matrix, and modified it with PBblas-based parallel matrix operations. In this approach, the results exactly match the MATLAB implementation.

Secondly, we followed the implicit Cholesky factorization approach for SVD from a published paper. In this approach, we calculate the Cholesky decomposition of the covariance matrix with the existing PBblas function and then apply the QR decomposition function, customized for PBblas, in a convergence loop. In the end we get singular values and singular vectors that are almost identical to the MATLAB implementation.
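The agreement between the two routes rests on a standard identity: the eigenvalues of the covariance matrix equal the squared singular values of the centered data matrix divided by n − 1. A quick NumPy check of that identity (illustrative only, not the PBblas code):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
Xc = X - X.mean(axis=0)          # center the data

# Route 1: eigen decomposition of the covariance matrix
cov = (Xc.T @ Xc) / (Xc.shape[0] - 1)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending

# Route 2: singular values of the centered data matrix
s = np.linalg.svd(Xc, compute_uv=False)            # already descending

# The routes agree: eigvals_i == s_i^2 / (n - 1)
print(np.allclose(eigvals, s**2 / (Xc.shape[0] - 1)))  # True
```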

Q: Where does PCA have the most value in real-world, big data applications?

A: I think in any kind of supervised or unsupervised learning task, but I believe it is used most frequently in the image processing community. PCA is an old technique, but it is still used often, not only in image processing but also in speech recognition and text mining. 

Q: Are you planning to implement any other dimensionality reduction methods?

A: Yes, we are. PCA is a linear dimensionality reduction technique, so we are currently focusing on more powerful nonlinear methods. Right now, we are implementing kernel PCA.

If you have additional questions, please contact Shah Muhammad Hamdi.

 

29:43 - Robert Kennedy, PhD student in Computer Science at Florida Atlantic University - Parallel Distributed Deep Learning on HPCC Systems

The training process for modern deep neural networks requires big data and large amounts of computational power. In this discussion, Robert covers what he implemented during his summer internship. Combining HPCC Systems and Google's TensorFlow, Robert created a parallel stochastic gradient descent algorithm to provide a basis for future deep neural network research and to enhance HPCC Systems' distributed neural network training capabilities.
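The pattern Robert describes, each worker computing a gradient on its local data shard with the shards' gradients averaged before a single global update, can be sketched on one machine with NumPy. The HPCC Systems/TensorFlow plumbing is omitted and the names here are illustrative, not Robert's actual code:

```python
import numpy as np

# Toy synchronous data-parallel SGD for linear regression: each "worker"
# holds one shard of the data, computes a local gradient, and the gradients
# are averaged before a single global parameter update.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=1000)

n_workers = 4
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

w = np.zeros(3)
for step in range(200):
    grads = []
    for Xs, ys in shards:                        # each worker, conceptually in parallel
        err = Xs @ w - ys
        grads.append(2 * Xs.T @ err / len(ys))   # local mean-squared-error gradient
    w -= 0.1 * np.mean(grads, axis=0)            # average gradients, apply one update

print(np.round(w, 2))  # close to [1.5, -2.0, 0.5]
```

With equal-sized shards, the averaged gradient equals the full-batch gradient, so the synchronous scheme converges to the same solution as single-node training.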

55:30 - Q&A

Q: Does the HPCC Systems server side require an NVIDIA video card to support the TensorFlow library?

A: No, the current implementation is independent of NVIDIA and CUDA. The implementation I presented runs entirely on CPU.

Q: Does each cluster have the whole data set, or is the data set randomly split and distributed to each node? Do you plan to test your implementation on GPU clusters?

A: Yes, the cluster as a whole has the complete data set, but each individual slave node does not. In a 10-node system, each node holds only one tenth of the data.

The data set is randomly split and distributed across the nodes, and the class distribution of the whole data set needs to be maintained. For example, if you have a 10-class data set with each class equally represented, then when you distribute it across your system, each node's partition needs the same class distribution, so that every class remains equally represented.
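A minimal sketch of that stratified partitioning, in plain Python with a hypothetical `stratified_partition` helper (not part of any HPCC Systems API):

```python
import numpy as np
from collections import Counter

def stratified_partition(labels, n_nodes):
    """Assign sample indices to nodes so that each node's class distribution
    mirrors the whole data set's (round-robin within each class)."""
    parts = [[] for _ in range(n_nodes)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)       # all samples of this class
        for i, sample in enumerate(idx):
            parts[i % n_nodes].append(sample)     # deal them out evenly
    return parts

labels = np.repeat(np.arange(10), 100)   # 10 classes, 100 samples each
parts = stratified_partition(labels, 10)
print(Counter(labels[parts[0]]))         # each class appears 10 times on node 0
```

In practice one would also shuffle within each class before dealing, so each node receives a random, rather than ordered, subset.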

As for the question regarding testing my implementation on GPU clusters, I have not, but it is a good idea.

Q: What was the biggest challenge in your project?

A: The biggest challenge was the integration of ECL and Python. Debugging was very challenging, and having to code in two very different languages at the same time was difficult, but it was very rewarding once everything started falling into place.

Q: What are the benefits of combining HPCC Systems and TensorFlow?

A: The benefit of combining HPCC Systems and TensorFlow is that TensorFlow is a very popular open source library. Combining them means any update or improvement to the deep learning libraries in TensorFlow is immediately available to this implementation.

Q: Is the implementation dependent on TensorFlow? What happens to the work if TensorFlow suddenly becomes obsolete or otherwise unusable?

A: No, the implementation would not be affected if TensorFlow somehow disappeared tomorrow.

If you have additional questions, please contact Robert Kennedy.

 

1:02:05 - Aramis Tanelus, high school student studying at American Heritage School of Boca/Delray, Florida - Developing HPCC Systems Data Ingestion APIs for Common Robotic Sensors
Aramis's project will make it easy for anyone in robotics around the world to ingest data from common robotic sensors into an HPCC Systems platform for use in data analysis. In this Tech Talk, Aramis speaks about his work on the autonomous agricultural robot and about implementing new packages for the Robot Operating System (ROS) to interface with HPCC Systems for big data analysis.

1:13:10 - Q&A

Q: What computer language do you use when creating ROS packages?

A: When creating packages for ROS, you can use C++ and Python; those are the two officially supported languages. However, there are also unofficial ports for Java, Lisp, and a few other languages if you are interested.

Q: What onboard computer is on the autonomous agricultural robot?

A: On the agricultural robot, there is a roboRIO and also an NVIDIA Jetson TK1. The roboRIO is something we use in the FIRST Robotics Competition to control the robot; it interfaces with all of the hardware, like the motors, motor controllers, and a few of the sensors, and it also receives commands from the external driver station. It is what we use to turn the robot on and off. As for the Jetson TK1, it is a single-board computer developed by NVIDIA, and we are using it to run the software written for the robot on the Robot Operating System and to handle some of the heavier lifting, like combining data from all the sensors and doing things like localization and mapping.

Q: How do you compare ECL with the other languages you've learned? Were there any challenges that you encountered in learning it?

A:  ECL is not similar to any other languages I have learned previously and getting used to the structure of ECL was a challenge.

Q: What inspired you to work with HPCC Systems and ROS? Is anyone else doing this?

A:  I was inspired by my mentors with HPCC Systems.

Q: Is your library in a ROS repository and would you be able to provide us with a link?

A: My package is in a ROS repository, and I am planning on publishing it through GitHub and also through the ROS build farm. I don't have a link at the moment, but I will have one soon and will probably publicize it through my blog.

If you have additional questions, please contact Aramis Tanelus.

 

1:19:24 - Saminda Wijeratne, Masters student studying Computational Science and Engineering at Georgia Institute of Technology, Atlanta - MPI Proof of Concept
The communication backbone of HPCC Systems connects all the different components and worker nodes so that each task in the system is accomplished quickly and seamlessly. The built-in "Message Passing" (MP) library in HPCC Systems is designed to handle these communications among dissimilar components and to perform non-trivial communication patterns among them. In this part of the Tech Talk, Saminda explores how this library currently operates and how a different implementation, such as the existing popular MPI library, could be introduced.

1:36:50 - Q&A

Q: How would your implementation handle cases where an HPCC Systems node in a cluster goes down or becomes unresponsive?

A: The MPI framework is actually designed for these scenarios: when a node goes down or loses connectivity, it tries to revive the node and bring it back up. If the MPI framework can't do that, it means something is wrong, and we would at least be able to detect it. I don't have to implement that myself; the MPI framework takes care of it. That's one of the real advantages of using the MPI framework.

Q:  Can I use the MPI implementation in your source code repository to run Thor right now?

A: At the current state you cannot. The reason is that the way the current implementation of Thor starts its processes and the way MPI launches applications are quite different. For example, in the current implementation, when Thor starts up it manually launches separate processes for the master and slaves, and they each need to start their own servers on different ports. In MPI this is automatic: you just call the mpirun command, provide the applications to run, and it starts them up with their own ports for communication. So Thor's startup would need some changes to work with this simpler model. At the moment, out of the box, you can just run the tests in the MPI library, which cover most of the scenarios.

Q: If you're using both implementations, can a message coming through MPI be passed to the MP implementation?

A: Yes. The current implementation supports that: we simply create the same message and pass it along to the MP library without knowing whether the MPI implementation or the MP implementation is running in the background. So whatever message goes through, or comes back, will work with either implementation.

If you have additional questions, please contact Saminda Wijeratne.

 

Have a new success story to share? We welcome you to be a speaker at one of our upcoming The Download: Tech Talks episodes.

  • Want to pitch a new use case?
  • Have a new HPCC Systems application you want to demo?
  • Want to share some helpful ECL tips and sample code?
  • Have a new suggestion for the roadmap?

 

Be a featured speaker for an upcoming episode! Email your idea to Techtalks@hpccsystems.com

Visit The Download Tech Talks wiki for more information about previous speakers and topics.