Persistent Storage for a Local Deployment
I am trying to get a single-node local deployment going for a small personal project. I have Docker Desktop (WSL2) and I am using PowerShell with Helm.
So far I have been able to get an instance of HPCC running, and I can successfully submit code and access the environment via ECLWatch (localhost:8010) and the ECL IDE. What I need is to be able to access files on my local machine and, preferably, write to disk and access my finished files after my work is done.
I tried following the instructions on the Containerized HPCC System Platform documentation, but I am running into some problems.
I have downloaded the helm charts from https://github.com/hpcc-systems/helm-chart and extracted the /examples folder into the folder from which I make my helm calls. I then run the following script from the documentation:
.\helm install hpcc-localfile examples/local/hpcc-localfile --set common.hostpath=/run/desktop/mnt/host/c/hpccdata
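For reference, the `common.hostpath` value here is the Windows folder as seen from inside Docker Desktop's WSL2 VM: C:\hpccdata surfaces as /run/desktop/mnt/host/c/hpccdata. A small sketch of that mapping (the helper function name is mine, and the /run/desktop/mnt/host prefix is specific to Docker Desktop on WSL2 — other setups use different mounts):

```shell
# Translate a Windows path like C:\hpccdata into the path Docker Desktop's
# WSL2 VM exposes it under: /run/desktop/mnt/host/<drive>/<rest>.
win_to_wsl2_hostpath() {
  local drive rest
  # lowercase the drive letter and drop the colon
  drive=$(printf '%s' "${1%%:*}" | tr '[:upper:]' '[:lower:]')
  # flip backslashes to forward slashes in the remainder
  rest=$(printf '%s' "${1#*:}" | tr '\\' '/')
  printf '/run/desktop/mnt/host/%s%s\n' "$drive" "$rest"
}

win_to_wsl2_hostpath 'C:\hpccdata'
# prints: /run/desktop/mnt/host/c/hpccdata
```

If your data lives on another drive, the same rule applies — only the drive letter in the prefix changes.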
This works fine, and I see hpcc-localfile running when I call ./helm list.
After this, the next step should be to spin up an instance of HPCC pointing to the .yaml mapping file, with the default path set to my local drive in the files created earlier (C:\hpccdata\...). I use the following command:
.\helm install mycluster hpcc/hpcc -f examples/local/values-localfile.yaml
When I run this, I see mycluster when I call ./helm list, but none of the pods in kubectl get pods ever get up and running. Most of them stay stuck in a pending/creating state, with a few reaching Running but never actually starting. I don't see any containers get created, and some of the pods rack up multiple restarts as time goes on. The rest never come fully online.
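(For anyone hitting the same symptom: `kubectl describe pod <name>` and `kubectl get events` show the failure reason in the Events section — with hostpath storage it is typically an unbound PersistentVolumeClaim or a missing host folder. To keep an eye on which pods are still unhealthy, a filter like this helps; the sample output below is made up for illustration, not from a real cluster:)

```shell
# Filter `kubectl get pods` output down to pods that are not fully ready:
# anything whose READY count is short (e.g. 0/1) or whose STATUS is not
# Running. Pipe real output in with: kubectl get pods | not_ready
not_ready() {
  awk 'NR > 1 { split($2, a, "/"); if (a[1] != a[2] || $3 != "Running") print }'
}

# Illustrative sample input:
not_ready <<'EOF'
NAME              READY   STATUS              RESTARTS   AGE
eclwatch-5d9f8    1/1     Running             0          5m
mydali-6c7b4      0/1     Pending             0          5m
roxie-774f5       0/1     ContainerCreating   3          5m
EOF
```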
Am I missing a vital step from the documentation, or could there be something else going on here?
Thanks,
Matt Rumsey
- mrumsey
Have you created the required folders under C:\hpccdata? The example helm chart for localfile does not create the folders. Missing folders are the most common cause of this type of deployment failing to start.
For Windows, use these commands:
mkdir c:\hpccdata
mkdir c:\hpccdata\dalistorage
mkdir c:\hpccdata\hpcc-data
mkdir c:\hpccdata\debug
mkdir c:\hpccdata\queries
mkdir c:\hpccdata\sasha
mkdir c:\hpccdata\dropzone
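If you are working from a POSIX shell (WSL or Git Bash) instead, the same layout can be created in one go with mkdir -p, which also makes the command safe to re-run. ROOT below is a placeholder for illustration; under WSL the Windows folder C:\hpccdata is normally visible as /mnt/c/hpccdata:

```shell
# POSIX-shell equivalent of the mkdir list above.
# ROOT is a placeholder -- point it at the real folder, e.g. /mnt/c/hpccdata.
ROOT=./hpccdata
mkdir -p "$ROOT"/dalistorage "$ROOT"/hpcc-data "$ROOT"/debug \
         "$ROOT"/queries "$ROOT"/sasha "$ROOT"/dropzone
ls "$ROOT"
```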
This portion of the documentation is being updated; the revision is under review before publication.
HTH,
Jim
- JimD
I had most of them; hpccdata/debug wasn't in my documentation, and the dropzone folder was named hpccdata/mydropzone.
I'll see if the updated folders do anything.
- mrumsey
Update: I made sure those folders existed and I am still not getting an environment to start up.
This gets an environment:
.\helm install mycluster hpcc/hpcc
This does not:
.\helm install mycluster hpcc/hpcc -f examples/local/values-localfile.yaml
I'm not very familiar with .yaml files or helm. Do I need to adjust something? I just used the files from the documentation and the git repo.
- mrumsey
Matt,
Thanks for calling this to our attention. I worked with you offline and you indicated that the issue is resolved and you have a running cluster with persistent local storage.
I am working on releasing an updated manual with the updated steps. I will reply to this post when that updated manual is available.
Regards,
Jim
- JimD
The updated manual is now available on the web site.
https://hpccsystems.com/training/docume ... d-Platform
Jim
- JimD