
@shinji62
Last active June 11, 2020 19:24
Blobstore migration: NFS to Azure blob storage

Purpose

This KB will help you migrate your ERT/PAS blobstore from NFS to Azure blob storage. We use goblob, a tool developed by Pivotal, to help with the migration. Goblob only supports storage with an S3-compatible API, so we add an S3 gateway (Minio) between NFS and the Azure blobstore.

Minio is a stable and simple way to achieve this.

Requirements

  • PCF 1.12
  • Goblob https://github.com/pivotal-cf/goblob
  • Minio Server https://minio.io/downloads.html
  • An Azure storage account (https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction)

Procedure

  1. Start Minio gateway

You can install the Minio gateway on the OpsManager VM, as the impact is minimal and temporary.

ssh ubuntu@opsmanager
mkdir migration && cd migration
wget https://dl.minio.io/server/minio/release/linux-amd64/minio
chmod +x minio

export MINIO_ACCESS_KEY=your_azure_account_name
export MINIO_SECRET_KEY=your_azure_account_key
./minio gateway azure

Minio should then be listening on port 9000.

  2. Disable application push

To prevent people from pushing applications during the migration, disable the app_bits_upload feature flag:

cf disable-feature-flag app_bits_upload
Setting status of app_bits_upload as admin...

OK

Feature app_bits_upload Disabled.

If you prefer to be stricter, you can also stop the Cloud Controller entirely (for example with bosh stop on the Cloud Controller instance group).

  3. Migrate using Goblob

Log in to the nfs_server VM:

bosh -d cf-deployment ssh nfs_server/0

First, make sure you can access Minio:

curl -s -k -o /dev/null -w "%{http_code}" http://OpsManagerIP:9000/minio/health/live

This should return 200.
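If you want to script this check rather than eyeball the status code, a small retry loop can wait until the gateway is actually up. This is a sketch, not part of the original KB; the URL and retry count are placeholders you should adapt:

```shell
# Hypothetical helper: poll the Minio health endpoint until it returns 200,
# or give up after a number of tries. Returns 0 on success, 1 on timeout.
wait_for_minio() {
  url="$1"
  tries="${2:-5}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    code=$(curl -s -k -o /dev/null -w "%{http_code}" "$url")
    if [ "$code" = "200" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# usage: wait_for_minio http://OpsManagerIP:9000/minio/health/live 10
```

This is handy if you wrap the whole migration in a script and want it to fail fast when the gateway is unreachable.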

Download Goblob on the nfs_server if the VM has internet access:

sudo su
cd /home/vcap
wget https://github.com/pivotal-cf/goblob/releases/download/v1.4.0/goblob-linux && mv goblob-linux goblob

If the ERT/PAS VM cannot access the internet, you can download the binary to OpsManager and scp it from there:

ssh ubuntu@opsmanager
cd migration
wget https://github.com/pivotal-cf/goblob/releases/download/v1.4.0/goblob-linux && mv goblob-linux goblob

bosh -d cf-deployment scp ./goblob nfs_server/0:/home/vcap

Run Goblob from the nfs_server:

sudo su
cd /home/vcap
chmod +x goblob

./goblob migrate \
  --blobstore-path="/var/vcap/store/shared" \
  --s3-endpoint="http://opsmanagerip:9000" \
  --s3-accesskey="your_azure_account_name" \
  --s3-secretkey="your_azure_account_key" \
  --buildpacks-bucket-name="pcf-buildpacks" \
  --droplets-bucket-name="pcf-droplets" \
  --packages-bucket-name="pcf-packages" \
  --resources-bucket-name="pcf-resources" \
  --use-multipart-uploads \
  --disable-ssl
  
 Migrating from NFS to S3

cc-buildpacks ......... done.
cc-droplets .. done.
cc-packages . done.
cc-resources  done.

Took 1m43.749974961s

Migrated files:    8
Already migrated:  4
Failed to migrate: 0

You can run this command again to make sure everything has been uploaded.
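As a rough cross-check, you can also compare file counts between the NFS source directories and what landed in the target buckets. The helper below is a sketch (not part of goblob); the /var/vcap/store/shared path in the usage comment matches the blobstore-path used above:

```shell
# Hypothetical helper: count the regular files under a directory, so you can
# compare the NFS source tree against the contents of the migrated buckets.
count_files() {
  find "$1" -type f 2>/dev/null | wc -l
}

# Example, run on the nfs_server:
#   count_files /var/vcap/store/shared/cc-droplets
```

Matching counts per bucket are a good sanity signal before you switch the platform over to the new blobstore.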

  4. Change settings in OpsManager

In OpsManager > ERT/PAS > File Storage, choose External Azure Storage, then complete the required settings to match the values used with Goblob.

  5. Apply changes

Do not run the smoke-test errand yet, as application push is still disabled.

  6. Re-enable application push
cf enable-feature-flag app_bits_upload
Setting status of app_bits_upload as admin...

OK

Feature app_bits_upload Enabled.
  7. Run the ERT/PAS smoke-test

In OpsManager > ERT/PAS > Errands, set smoke-test to ON, then apply changes.

Notes

From PCF 1.12, when you change the blobstore type in the ERT/PAS tile, the NFS server is automatically deleted.


farukhkhan123 commented Jun 11, 2020

Hello,
I am using the instructions provided above to migrate the ERT/PAS blobstore over to a Minio internal blobstore and am facing an issue. The actual blobstore size on the NFS VM is:

nfs_server/8d0a7def-e088-42f3-b295-0332dc5e3464:/var/vcap/store/shared/cc-resources# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdc1 99G 27G 67G 29% /var/vcap/store

The migration runs OK without reporting any errors; however, when I log in to the Minio VMs and check the blobstore size it shows:
minio-server/49683b25-34a3-4b3f-b912-684108cce944:/var/vcap/store/minio-server/resources$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdd1 50G 8.2G 39G 18% /var/vcap/store

Going through each bucket and comparing sizes, I am seeing significant discrepancies. Content is missing from almost all of the buckets, and it looks like only a quarter of the content was migrated over. Any ideas on what could be causing this?

The only change I made is in Step 1, setting up the gateway on the OpsMan VM. Since our OM deployment is pretty locked down, getting the binary onto the VM is not easy (scp and internet access are blocked). I am using the Minio LB as the API endpoint in the command below:

./goblob migrate --blobstore-path="/var/vcap/store/shared" --s3-endpoint="http://" --s3-accesskey="minio" --s3-secretkey="xxxxx" --buildpacks-bucket-name="buildpacks" --droplets-bucket-name="droplets" --packages-bucket-name="packages" --resources-bucket-name="resources" --use-multipart-uploads --disable-ssl

I can see the buckets getting loaded with correct data, but the data is incomplete. I ran the same command multiple times, but the results show there aren't any files left to migrate. Please advise.

Took 7m24.482917017s

Migrated files: 0
Already migrated: 4049
Failed to migrate: 0
