IK.AM

@making's tech note


Notes on installing Pivotal Application Service (PAS) 2.4 on Azure via the CLI

🗃 {Dev/PaaS/CloudFoundry/PCF}
🏷 Azure 🏷 Cloud Foundry 🏷 Pivotal Cloud Foundry 🏷 Ops Manager 🏷 PAS 
🗓 Updated at 2019-05-22T04:10:25Z  🗓 Created at 2019-05-21T15:09:40Z

This is a note on installing PAS 2.4 on Azure.

It is the Azure version of "Notes on installing Pivotal Application Service (PAS) 2.4 on AWS via the CLI".

Some of the screenshots still show the AWS logo, but the environment is actually Azure.

All of the work in this article is performed in the Azure Cloud Shell.

image

Table of Contents

Installing the CLIs

First, create the installation directory $HOME/clouddrive/bin and add it to the PATH environment variable in $HOME/.bashrc.

mkdir -p $HOME/clouddrive/bin
echo 'export PATH=$PATH:$HOME/clouddrive/bin' >> $HOME/.bashrc
source $HOME/.bashrc

Install the following commands.

wget -O $HOME/clouddrive/bin/om https://github.com/pivotal-cf/om/releases/download/1.0.0/om-linux
chmod +x $HOME/clouddrive/bin/om 

wget -O $HOME/clouddrive/bin/pivnet https://github.com/pivotal-cf/pivnet-cli/releases/download/v0.0.58/pivnet-linux-amd64-0.0.58
chmod +x $HOME/clouddrive/bin/pivnet

wget -O $HOME/clouddrive/bin/az-automation https://github.com/genevieve/az-automation/releases/download/v0.2.0/az-automation-v0.2.0-linux-amd64
chmod +x $HOME/clouddrive/bin/az-automation

wget -O $HOME/clouddrive/bin/bosh https://github.com/cloudfoundry/bosh-cli/releases/download/v5.5.0/bosh-cli-5.5.0-linux-amd64
chmod +x $HOME/clouddrive/bin/bosh

wget -O $HOME/clouddrive/bin/yj https://github.com/sclevine/yj/releases/download/v4.0.0/yj-linux
chmod +x $HOME/clouddrive/bin/yj
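
To double-check that the commands are available on the PATH, printing their versions is a quick sanity check (the exact output will vary by release):

om --version
pivnet --version
bosh --version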

The following CLIs are already installed in the Azure Cloud Shell and were verified at the versions below (yours do not have to match exactly).

$ terraform --version
Terraform v0.11.13
+ provider.azurerm v1.28.0
+ provider.random v2.1.2
+ provider.tls v2.0.1

$ jq --version
jq-1.5-1-a5b5cbe

$ cf --version
cf version 6.44.0+5de0f0d02.2019-05-01

Creating the Azure environment with Terraform

Fetch the Terraform templates. The content of this article was verified at commit 86ffb6d10c2718742126414b1c5e3bf5b4dc4fb9.

mkdir install-pas
cd install-pas

git clone https://github.com/pivotal-cf/terraforming-azure.git template

Create an Azure account (service principal) for Terraform with az-automation.

az-automation \
  --account $(az account show --query id -o tsv) \
  --identifier-uri http://tf.pas \
  --display-name tf-pas \
  --credential-output-file creds.vars

Using the generated creds.vars, create terraform.tfvars.

The ops_manager_image_uri can be obtained from the Pivotal Cloud Foundry Ops Manager YAML for Azure on Pivotal Network. Using the latest 2.4 image is recommended.
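
If you want to inspect that YAML from the shell, the following sketch converts it to JSON with yj and jq so you can pick out the VHD URI for your region (the local file name ops-manager-azure.yml is hypothetical; use the file you downloaded from Pivotal Network):

cat ops-manager-azure.yml | yj -yj | jq .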

Create terraform.tfvars
sed -e 's/= /= "/g' -e 's/$/"/g' creds.vars > terraform.tfvars
cat <<EOF >> terraform.tfvars
env_name = "pas"
env_short_name = "pas"
location = "Japan East"
ops_manager_image_uri = "https://opsmanagersoutheastasia.blob.core.windows.net/images/ops-manager-2.4-build.202.vhd"
dns_suffix = "ik.am"
vm_admin_username = "admin"

# Use Let's Encrypt, or
# generate a self-signed certificate with the script below:
# https://raw.githubusercontent.com/aws-quickstart/quickstart-pivotal-cloudfoundry/master/scripts/gen_ssl_certs.sh
# ./gen_ssl_certs.sh example.com apps.example.com sys.example.com uaa.sys.example.com login.sys.example.com

ssl_cert=<<EOD
-----BEGIN CERTIFICATE-----
************************* (for Let's Encrypt, the contents of fullchain.pem)
-----END CERTIFICATE-----
EOD
ssl_private_key=<<EOD
-----BEGIN PRIVATE KEY-----
************************* (for Let's Encrypt, the contents of privkey.pem)
-----END PRIVATE KEY-----
EOD

EOF

Set env_name and dns_suffix according to the domain name you intend to use.

The OpsManager domain name will be pcf.<env_name>.<dns_suffix>. For PAS, the application domain will be apps.<env_name>.<dns_suffix> and the system domain will be sys.<env_name>.<dns_suffix>, for example.

For example, if you want the application domain to be apps.xyz.example.com, set env_name = "xyz" and dns_suffix = "example.com"; if you want it to be apps.foo.bar, set env_name = "foo" and dns_suffix = "bar".

If you obtain a TLS certificate, include *.<env_name>.<dns_suffix>, *.apps.<env_name>.<dns_suffix>, and *.sys.<env_name>.<dns_suffix> in it (and also *.uaa.sys.<env_name>.<dns_suffix> and *.login.sys.<env_name>.<dns_suffix> if you will use Spring Cloud Services for PCF or Single Sign-On for PCF).
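
For example, a self-signed certificate covering all of the names above can be generated with the gen_ssl_certs.sh script referenced in the terraform.tfvars comments (a sketch assuming env_name="pas" and dns_suffix="ik.am"):

wget https://raw.githubusercontent.com/aws-quickstart/quickstart-pivotal-cloudfoundry/master/scripts/gen_ssl_certs.sh
chmod +x gen_ssl_certs.sh
./gen_ssl_certs.sh pas.ik.am apps.pas.ik.am sys.pas.ik.am uaa.sys.pas.ik.am login.sys.pas.ik.am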

Make a few modifications to the provided Terraform templates.

Modify the Terraform templates
# Remove the LB for the TCP Router (the TCP Router will not be installed this time)
sed -i.bk 's|resource "azurerm_public_ip" "tcp-lb-public-ip"|/* resource "azurerm_public_ip" "tcp-lb-public-ip"|' template/modules/pas/tcplb.tf
echo '*/' >> template/modules/pas/tcplb.tf

# Remove the LB for MySQL (no longer needed as of PAS 2.4)
sed -i.bk 's|resource "azurerm_lb" "mysql"|/* resource "azurerm_lb" "mysql"|' template/modules/pas/mysqllb.tf
echo '*/' >> template/modules/pas/mysqllb.tf

# Fix the DNS records affected by the changes above
sed -i.bk 's|resource "azurerm_dns_a_record" "mysql"|/* resource "azurerm_dns_a_record" "mysql"|g' template/modules/pas/dns.tf
echo '*/' >> template/modules/pas/dns.tf

# Fix the outputs affected by the changes above
sed -i.bk 's/${azurerm_lb.tcp.name}//g' template/modules/pas/outputs.tf
sed -i.bk 's/tcp.${azurerm_dns_a_record.tcp.zone_name}//g' template/modules/pas/outputs.tf
sed -i.bk 's/${azurerm_lb.mysql.name}//g' template/modules/pas/outputs.tf
sed -i.bk 's/mysql.${azurerm_dns_a_record.mysql.zone_name}//g' template/modules/pas/outputs.tf

# Clean up the backup files
rm -f `find . -name '*.bk'`

Just to be safe, check the diff.

cd template
echo "======== Diff =========="
git diff | cat
echo "========================"
cd -

Run Terraform.

terraform init template/terraforming-pas
terraform plan -out plan template/terraforming-pas
terraform apply plan

After about 30 minutes, the Azure resources required to install PAS will have been created.

image

  • Virtual Network
  • Network Interface
  • Network security group
  • Public IP Address
  • Load Balancer
  • Storage Account
  • Disk
  • Virtual Machine
  • DNS Zone
  • Blobstore

Register the NS record values of the created DNS zone on the parent domain side (your registrar or parent zone).
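
The name server values can be taken from the Terraform outputs (a sketch assuming the terraforming-azure templates expose an env_dns_zone_name_servers output, as their AWS counterpart does):

terraform output env_dns_zone_name_servers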

Once the DNS changes have propagated, you can access OpsManager at https://pcf.<env_name>.<dns_suffix>.

image

Everything will be configured from the CLI, so do not make any changes in the browser.

Configuring OpsManager

Perform the initial OpsManager setup with the om command.

Create the OpsManager admin user. Change the username and password as appropriate.

cat <<EOF >> $HOME/.bashrc
export OM_DECRYPTION_PASSPHRASE=pasw0rd
export OM_PASSWORD=pasw0rd
export OM_SKIP_SSL_VALIDATION=true
export OM_TARGET=$(terraform output ops_manager_dns)
export OM_USERNAME=admin
EOF

source $HOME/.bashrc

These settings can be applied to OpsManager with the om configure-authentication command.

om configure-authentication \
   --username ${OM_USERNAME} \
   --password ${OM_PASSWORD} \
   --decryption-passphrase ${OM_DECRYPTION_PASSPHRASE}

The next step is not required, but if you are using a real certificate, such as one from Let's Encrypt, you can configure OpsManager to use it as well (by default, a self-signed certificate is used).

# To use the certificate set in terraform.tfvars, run the following
terraform output ssl_cert > cert.pem
terraform output ssl_private_key > private.pem

om update-ssl-certificate \
  --certificate-pem="$(cat cert.pem)" \
  --private-key-pem="$(cat private.pem)"

rm -f *.pem

When you access OpsManager, the login screen is displayed.

image

You can log in as the admin user.

image

The component diagram at this point looks like the following.

image

PlantUML (for reference)
@startuml
package "infrastructure" {
  package "10.0.8.0/26" {
    node "Ops Manager"
  }
}

package "main" {
  package "10.0.0.0/22" {
  }
}

package "services" {
  package "10.0.4.0/22" {

  }
}

database "Azure Blobstore"

rectangle "web-lb"
rectangle "ssh-lb"

boundary "ops-manager-ip"
boundary "web-lb-ip"
boundary "ssh-lb-ip"

actor Operator #green

[ops-manager-ip] .down.> [Ops Manager]
[web-lb-ip] .down.> [web-lb]
[ssh-lb-ip] .down.> [ssh-lb]

Operator -[#green]--> [ops-manager-ip]
@enduml

Configuring BOSH Director

Next, configure BOSH Director, which is responsible for installing, updating, health-monitoring, and auto-recovering PAS (Cloud Foundry). The required information is included in the Terraform outputs, so generate the parameter YAML (vars.yml) for configuring BOSH Director from those outputs.

Generate vars.yml
export TF_DIR=$(pwd)
mkdir director
cd director

export SUBSCRIPTION_ID=$(terraform output --state=${TF_DIR}/terraform.tfstate subscription_id)
export TENANT_ID=$(terraform output --state=${TF_DIR}/terraform.tfstate tenant_id)
export CLIENT_ID=$(terraform output --state=${TF_DIR}/terraform.tfstate client_id)
export CLIENT_SECRET=$(terraform output --state=${TF_DIR}/terraform.tfstate client_secret)
export RESOURCE_GROUP_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate pcf_resource_group_name)
export BOSH_STORAGE_ACCOUNT_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate bosh_root_storage_account)
export DEPLOYMENTS_STORAGE_ACCOUNT_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate ops_manager_storage_account)
export DEFAULT_SECURITY_GROUP=$(terraform output --state=${TF_DIR}/terraform.tfstate bosh_deployed_vms_security_group_name)
export OPS_MANAGER_SSH_PUBLIC_KEY=$(terraform output --state=${TF_DIR}/terraform.tfstate ops_manager_ssh_public_key | sed 's/^/      /')
export OPS_MANAGER_SSH_PRIVATE_KEY=$(terraform output --state=${TF_DIR}/terraform.tfstate ops_manager_ssh_private_key | sed 's/^/      /')
export NETWORK_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate network_name)
export MANAGEMENT_SUBNET_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate management_subnet_name)
export MANAGEMENT_SUBNET_CIDRS=$(terraform output --state=${TF_DIR}/terraform.tfstate -json | jq -r '.management_subnet_cidrs.value[0]')
export MANAGEMENT_SUBNET_GATEWAY=$(terraform output --state=${TF_DIR}/terraform.tfstate management_subnet_gateway)
export MANAGEMENT_RESERVED_IP_RANGES="$(echo $MANAGEMENT_SUBNET_CIDRS | sed 's|0/26$|1|g')-$(echo $MANAGEMENT_SUBNET_CIDRS | sed 's|0/26$|9|g')"
export PAS_MAIN_SUBNET_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate pas_subnet_name)
export PAS_SUBNET_CIDRS=$(terraform output --state=${TF_DIR}/terraform.tfstate -json | jq -r '.pas_subnet_cidrs.value[0]')
export PAS_SUBNET_GATEWAY=$(terraform output --state=${TF_DIR}/terraform.tfstate pas_subnet_gateway)
export PAS_RESERVED_IP_RANGES="$(echo $PAS_SUBNET_CIDRS | sed 's|0/22$|1|g')-$(echo $PAS_SUBNET_CIDRS | sed 's|0/22$|9|g')"
export SERVICES_SUBNET_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate services_subnet_name)
export SERVICES_SUBNET_CIDRS=$(terraform output --state=${TF_DIR}/terraform.tfstate -json | jq -r '.services_subnet_cidrs.value[0]')
export SERVICES_SUBNET_GATEWAY=$(terraform output --state=${TF_DIR}/terraform.tfstate services_subnet_gateway)
export SERVICES_RESERVED_IP_RANGES="$(echo $SERVICES_SUBNET_CIDRS | sed 's|0/22$|1|g')-$(echo $SERVICES_SUBNET_CIDRS | sed 's|0/22$|9|g')"
export OPS_MGR_TRUSTED_CERTS=""

cat <<EOF > vars.yml
subscription_id: ${SUBSCRIPTION_ID}
tenant_id: ${TENANT_ID}
client_id: ${CLIENT_ID}
client_secret: ${CLIENT_SECRET}
resource_group_name: ${RESOURCE_GROUP_NAME}
bosh_storage_account_name: ${BOSH_STORAGE_ACCOUNT_NAME}
deployments_storage_account_name: ${DEPLOYMENTS_STORAGE_ACCOUNT_NAME}
default_security_group: ${DEFAULT_SECURITY_GROUP}
ops_manager_ssh_public_key: |
${OPS_MANAGER_SSH_PUBLIC_KEY}
ops_manager_ssh_private_key: |
${OPS_MANAGER_SSH_PRIVATE_KEY}
network_name: ${NETWORK_NAME}
management_subnet_name: ${MANAGEMENT_SUBNET_NAME}
management_subnet_cidrs: ${MANAGEMENT_SUBNET_CIDRS}
management_subnet_gateway: ${MANAGEMENT_SUBNET_GATEWAY}
management_reserved_ip_ranges: ${MANAGEMENT_RESERVED_IP_RANGES}
pas_main_subnet_name: ${PAS_MAIN_SUBNET_NAME}
pas_subnet_cidrs: ${PAS_SUBNET_CIDRS}
pas_subnet_gateway: ${PAS_SUBNET_GATEWAY}
pas_reserved_ip_ranges: ${PAS_RESERVED_IP_RANGES}
services_subnet_name: ${SERVICES_SUBNET_NAME}
services_subnet_cidrs: ${SERVICES_SUBNET_CIDRS}
services_subnet_gateway: ${SERVICES_SUBNET_GATEWAY}
services_reserved_ip_ranges: ${SERVICES_RESERVED_IP_RANGES}
ops_mgr_trusted_certs: ${OPS_MGR_TRUSTED_CERTS}
EOF
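
As an optional sanity check, you can confirm that the generated vars.yml parses as valid YAML by evaluating it with bosh int (which simply interpolates and prints it):

bosh int vars.yml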

Next, create the BOSH Director configuration file (config.yml).

Generate config.yml
cat <<EOF > config.yml
# Network definitions
networks-configuration:
  icmp_checks_enabled: false
  networks:
  # Network in which BOSH Director is installed
  - name: pas-infrastructure-network
    subnets:
    - iaas_identifier: ((network_name))/((management_subnet_name))
      cidr: ((management_subnet_cidrs))
      reserved_ip_ranges: ((management_reserved_ip_ranges))
      dns: "168.63.129.16"
      gateway: ((management_subnet_gateway))
  # Network for the main products, including PAS
  - name: pas-main-network
    subnets:
    - iaas_identifier: ((network_name))/((pas_main_subnet_name))
      cidr: ((pas_subnet_cidrs))
      reserved_ip_ranges: ((pas_reserved_ip_ranges))
      dns: "168.63.129.16"
      gateway: ((pas_subnet_gateway))
  # Network for on-demand services (MySQL, PCC, etc.)
  - name: pas-services-network
    subnets:
    - iaas_identifier: ((network_name))/((services_subnet_name))
      cidr: ((services_subnet_cidrs))
      reserved_ip_ranges: ((services_reserved_ip_ranges))
      dns: "168.63.129.16"
      gateway: ((services_subnet_gateway))
# Which network BOSH Director is placed in
network-assignment:
  network:
    name: pas-infrastructure-network
# Detailed settings for the target software (here, BOSH Director)
properties-configuration:
  iaas_configuration:
    subscription_id: ((subscription_id))
    tenant_id: ((tenant_id))
    client_id: ((client_id))
    client_secret: ((client_secret))
    resource_group_name: ((resource_group_name))
    bosh_storage_account_name: ((bosh_storage_account_name))
    cloud_storage_type: managed_disks
    storage_account_type: Standard_LRS
    default_security_group: ((default_security_group))
    ssh_public_key: ((ops_manager_ssh_public_key))
    ssh_private_key: ((ops_manager_ssh_private_key))
    environment: AzureCloud
  director_configuration:
    ntp_servers_string: "0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org,3.pool.ntp.org"
    resurrector_enabled: true
    post_deploy_enabled: true
    database_type: internal
    blobstore_type: local
  security_configuration:
    trusted_certificates: "((ops_mgr_trusted_certs))"
    vm_password_type: generate
  syslog_configuration: {}
resource-configuration:
  director:
    instance_type:
      id: automatic
  compilation:
    instance_type:
      id: automatic
# IaaS-specific extension settings that can be configured per VM
# See the following document for the configurable properties
# https://bosh.io/docs/azure-cpi/#resource-pools
# Defined in the following format
# - name: extension name
#   cloud_properties:
#     property name: value
vmextensions-configuration: []
EOF

Configure BOSH Director using config.yml and vars.yml.

cd ..

om configure-director \
   --config director/config.yml \
   --vars-file director/vars.yml
Output
started configuring director options for bosh tile
finished configuring director options for bosh tile
started configuring availability zone options for bosh tile
finished configuring availability zone options for bosh tile
started configuring network options for bosh tile
finished configuring network options for bosh tile
started configuring network assignment options for bosh tile
finished configuring network assignment options for bosh tile
started configuring resource options for bosh tile
applying resource configuration for the following jobs:
    compilation
    director
finished configuring resource options for bosh tile

image

Once the sections that were orange turn green, the configuration is complete.

Modifying VM Types

This is not required, but it is recommended. We will modify the preset vm_types and add a few new ones.

Here we will:

  • Change the memory size of Standard_DS1_v2 to 4GB
  • Change the memory size of Standard_DS2_v2 to 8GB
  • Add Standard_B1s
  • Add Standard_B1ms
  • Add Standard_B2s
# First, download the existing settings.
om curl \
   --silent \
   --path /api/v0/vm_types > vm_types.json

# Express the changes as a YAML patch (ops file)
cat <<EOF > ops.yml
- type: replace
  path: /vm_types/name=Standard_DS1_v2?
  value: 
    cpu: 1
    ephemeral_disk: 51200
    name: Standard_DS1_v2
    ram: 4096
- type: replace
  path: /vm_types/name=Standard_DS2_v2?
  value: 
    cpu: 2
    ephemeral_disk: 102400
    name: Standard_DS2_v2
    ram: 8192
- type: replace
  path: /vm_types/name=Standard_B1s?
  value: 
    cpu: 1
    ephemeral_disk: 4096
    name: Standard_B1s
    ram: 1024
- type: replace
  path: /vm_types/name=Standard_B1ms?
  value: 
    cpu: 1
    ephemeral_disk: 4096
    name: Standard_B1ms
    ram: 2048
- type: replace
  path: /vm_types/name=Standard_B2s?
  value: 
    cpu: 2
    ephemeral_disk: 8192
    name: Standard_B2s
    ram: 4096
EOF


bosh int vm_types.json | yj -yj | jq . > vm_types_old.json
bosh int vm_types.json -o ops.yml | yj -yj | jq . > vm_types_new.json
diff vm_types_old.json vm_types_new.json | cat

Apply the modified vm_types_new.json.

om curl \
   --silent \
   --request PUT \
   --path /api/v0/vm_types \
   --data "$(cat vm_types_new.json)"

rm -f ops.yml vm_types*.json
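
To confirm that the changes took effect, you can query the same API endpoint again, for example to list the registered VM type names:

om curl --silent --path /api/v0/vm_types | jq -r '.vm_types[].name'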

Configuring PAS

First, download PAS from Pivotal Network.

The PIVNET_TOKEN can be obtained from your Profile page on Pivotal Network.

PIVNET_TOKEN=***********************************************
pivnet login --api-token=${PIVNET_TOKEN}
Output
Logged-in successfully

Download PAS with the pivnet CLI. Specify the version you want with -r.

You can list the downloadable files with pivnet product-files.

pivnet product-files -p elastic-runtime -r 2.4.7
Output
+--------+--------------------------------+----------------+---------------------+------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
|   ID   |              NAME              |  FILE VERSION  |      FILE TYPE      |                              SHA256                              |                                       AWS OBJECT KEY                                        |
+--------+--------------------------------+----------------+---------------------+------------------------------------------------------------------+---------------------------------------------------------------------------------------------+
| 277712 | PCF Pivotal Application        |            2.4 | Open Source License |                                                                  | product-files/elastic-runtime/open_source_license_cf-2.4.0-build.360-bbfc877-1545144449.txt |
|        | Service v2.4 OSL               |                |                     |                                                                  |                                                                                             |
| 337679 | GCP Terraform Templates 0.74.0 | 0.74.0         | Software            | 63677828b2eb1cea194b0c027a6be2df7eec2608460c3a576b3d3ddeaa847d64 | product-files/elastic-runtime/terraforming-gcp-0.74.0.zip                                   |
| 351857 | Azure Terraform Templates      | 0.40.0         | Software            | c5fe3e6b9be6e9d9cf754c26d0c8a25739911c9cd0bf225e4f7190599f3b2477 | product-files/elastic-runtime/terraforming-azure-0.40.0.zip                                 |
|        | 0.40.0                         |                |                     |                                                                  |                                                                                             |
| 351840 | AWS Terraform Templates 0.37.0 | 0.37.0         | Software            | e4d71ad8836a886c57657701db64da20f2c0279877e3c44cc3fd038750b3be96 | product-files/elastic-runtime/terraforming-aws-0.37.0.zip                                   |
| 375604 | Small Footprint PAS            | 2.4.7-build.16 | Software            | cc3d228e9e12f31eb6d3282d667425dfad89dccf5a2a48a5ddb4926ea3f2995b | product-files/elastic-runtime/srt-2.4.7-build.16.pivotal                                    |
| 371315 | CF CLI 6.44.0                  | 6.44.0         | Software            | 89109aea65a0a42099b967003ead34fc445831b39127ff8fd84ae28716fd5ee1 | product-files/elastic-runtime/cf-cli-6.44.0.zip                                             |
| 375592 | Pivotal Application Service    | 2.4.7-build.16 | Software            | 9f9add761522ea23194ba4730a9bb514a272e42247f0de541cc7c7b434e549dd | product-files/elastic-runtime/cf-2.4.7-build.16.pivotal                                     |
+--------+--------------------------------+----------------+---------------------+------------------------------------------------------------------+---------------------------------------------------------------------------------------------+

The files ending in .pivotal are PAS itself (the srt one is the Small Footprint Runtime edition). They can be downloaded with the pivnet download-product-files command.

cd /tmp
pivnet download-product-files -p elastic-runtime -r 2.4.7 --glob=cf-*.pivotal
cd -
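
Optionally, verify the download against the SHA256 value shown by pivnet product-files:

sha256sum /tmp/cf-2.4.7-build.16.pivotal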

Upload the downloaded file to OpsManager.

om upload-product -p /tmp/cf-2.4.7-build.16.pivotal

The uploaded product appears on the left-hand side of OpsManager.

image

Stage the uploaded product. (In the UI this is just a matter of clicking the "+" button, but we deliberately do it via the CLI.)

export PRODUCT_NAME=cf
export PRODUCT_VERSION=2.4.7

om stage-product -p ${PRODUCT_NAME} -v ${PRODUCT_VERSION}
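
You can confirm that the product was staged with om staged-products, which lists the staged product names and versions:

om staged-products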

The PAS configuration screen appears. It is orange, meaning the configuration is incomplete.

image

As with BOSH Director, generate the parameter YAML (vars.yml) for configuring PAS from the Terraform outputs. Note that the availability-zone variables (AVAILABILITY_ZONE_NAMES, AVAILABILITY_ZONES, SINGLETON_AVAILABILITY_ZONE) are never referenced by config.yml (PAS 2.4 on Azure does not support AZs), and the SMTP settings are optional; export the SMTP_* variables beforehand if you use email notifications, otherwise the corresponding values are simply rendered empty.

Generate vars.yml
export TF_DIR=$(pwd)
mkdir pas
cd pas

export SYSTEM_DOMAIN=$(terraform output --state=${TF_DIR}/terraform.tfstate sys_domain)
export PAS_MAIN_NETWORK_NAME=pas-main-network
export PAS_SERVICES_NETWORK_NAME=pas-services-network
export APPS_DOMAIN=$(terraform output --state=${TF_DIR}/terraform.tfstate apps_domain)
export WEB_LB_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate web_lb_name)
export SSH_LB_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate diego_ssh_lb_name)
export TCP_LB_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate tcp_lb_name)
export CF_STORAGE_ACCOUNT_NAME=$(terraform output --state=${TF_DIR}/terraform.tfstate cf_storage_account_name)
export CF_STORAGE_ACCESS_KEY=$(terraform output --state=${TF_DIR}/terraform.tfstate cf_storage_account_access_key)
export CF_STORAGE_BUILDPACKS_STORAGE_CONTAINER=$(terraform output --state=${TF_DIR}/terraform.tfstate cf_buildpacks_storage_container)
export CF_STORAGE_DROPLETS_STORAGE_CONTAINER=$(terraform output --state=${TF_DIR}/terraform.tfstate cf_droplets_storage_container)
export CF_STORAGE_PACKAGES_STORAGE_CONTAINER=$(terraform output --state=${TF_DIR}/terraform.tfstate cf_packages_storage_container)
export CF_STORAGE_RESOURCES_STORAGE_CONTAINER=$(terraform output --state=${TF_DIR}/terraform.tfstate cf_resources_storage_container)
export CERT_PEM=`cat <<EOF | sed 's/^/  /'
$(terraform output --state=${TF_DIR}/terraform.tfstate ssl_cert)
EOF
`
export KEY_PEM=`cat <<EOF | sed 's/^/  /'
$(terraform output --state=${TF_DIR}/terraform.tfstate ssl_private_key)
EOF
`

cat <<EOF > vars.yml
cert_pem: |
${CERT_PEM}
key_pem: |
${KEY_PEM}
availability_zone_names: ${AVAILABILITY_ZONE_NAMES}
pas_main_network_name: ${PAS_MAIN_NETWORK_NAME}
pas_services_network_name: ${PAS_SERVICES_NETWORK_NAME}
availability_zones: ${AVAILABILITY_ZONES}
singleton_availability_zone: ${SINGLETON_AVAILABILITY_ZONE}
apps_domain: ${APPS_DOMAIN}
system_domain: ${SYSTEM_DOMAIN}
web_lb_name: ${WEB_LB_NAME}
ssh_lb_name: ${SSH_LB_NAME}
tcp_lb_name: ${TCP_LB_NAME}
cf_storage_account_name: ${CF_STORAGE_ACCOUNT_NAME}
cf_storage_account_access_key: ${CF_STORAGE_ACCESS_KEY}
cf_buildpacks_storage_container: ${CF_STORAGE_BUILDPACKS_STORAGE_CONTAINER}
cf_droplets_storage_container: ${CF_STORAGE_DROPLETS_STORAGE_CONTAINER}
cf_packages_storage_container: ${CF_STORAGE_PACKAGES_STORAGE_CONTAINER}
cf_resources_storage_container: ${CF_STORAGE_RESOURCES_STORAGE_CONTAINER}
smtp_from: ${SMTP_FROM}
smtp_address: ${SMTP_ADDRESS}
smtp_port: ${SMTP_PORT}
smtp_username: ${SMTP_USERNAME}
smtp_password: ${SMTP_PASSWORD}
smtp_enable_starttls: ${SMTP_ENABLE_STARTTLS}
EOF

Next, create the PAS configuration file (config.yml).

Generate config.yml
cat <<EOF > config.yml
product-name: cf
product-properties:
  # System domain
  .cloud_controller.system_domain:
    value: ((system_domain))
  # Apps domain
  .cloud_controller.apps_domain:
    value: ((apps_domain))
  # TLS termination settings; here TLS is terminated at the GoRouter.
  .properties.routing_tls_termination:
    value: router
  # HAProxy settings (not used, so ignored)
  .properties.haproxy_forward_tls:
    value: disable
  .ha_proxy.skip_cert_verify:
    value: true
  # TLS certificate used by the GoRouter
  .properties.networking_poe_ssl_certs:
    value:
    - name: pas-wildcard
      certificate:
        cert_pem: ((cert_pem))
        private_key_pem: ((key_pem))
  # Port number of the Traffic Controller
  .properties.logger_endpoint_port:
    value: 443
  # Enable Dynamic Egress Policies, a new feature in PAS 2.4
  # https://docs.pivotal.io/pivotalcf/2-4/installing/highlights.html#dynamic-egress
  .properties.experimental_dynamic_egress_enforcement:
    value: 1
  # By default, Azure load balancer times out at 240 seconds without sending a TCP RST to clients, so as an exception, Pivotal recommends a value lower than 240 to force the load balancer to send the TCP RST.
  .router.frontend_idle_timeout:
    value: 239
  .properties.security_acknowledgement:
    value: X
  .properties.secure_service_instance_credentials:
    value: true
  .properties.cf_networking_enable_space_developer_self_service:
    value: true
  .uaa.service_provider_key_credentials:
    value:
      cert_pem: ((cert_pem))
      private_key_pem: ((key_pem))
  .properties.credhub_key_encryption_passwords:
    value: 
    - name: key1
      key: 
        secret: credhubsecret1credhubsecret1
      primary: true
  .mysql_monitor.recipient_email:
    value: notify@example.com
  # System database settings. Here the internal MySQL (Percona XtraDB Cluster) is chosen rather than an external database.
  .properties.system_database:
    value: internal_pxc
  # Use TLS for communication with the internal PXC
  .properties.enable_tls_to_internal_pxc:
    value: 1
  # Use the internal database for UAA
  .properties.uaa_database:
    value: internal_mysql
  # Use the internal database for CredHub
  .properties.credhub_database:
    value: internal_mysql 
  # Blobstore settings. Here External (Azure Blob Storage) is chosen.
  .properties.system_blobstore:
    value: external_azure
  .properties.system_blobstore.external_azure.account_name:
    value: ((cf_storage_account_name))
  .properties.system_blobstore.external_azure.access_key:
    value: 
      secret: ((cf_storage_account_access_key))
  .properties.system_blobstore.external_azure.buildpacks_container:
    value: ((cf_buildpacks_storage_container))
  .properties.system_blobstore.external_azure.droplets_container:
    value: ((cf_droplets_storage_container))
  .properties.system_blobstore.external_azure.packages_container:
    value: ((cf_packages_storage_container))
  .properties.system_blobstore.external_azure.resources_container:
    value: ((cf_resources_storage_container))
  .properties.autoscale_instance_count:
    value: 1
  # SMTP settings for notifications (optional)
  .properties.smtp_from:
    value: ((smtp_from))
  .properties.smtp_address:
    value: ((smtp_address))
  .properties.smtp_port:
    value: ((smtp_port))
  .properties.smtp_credentials:
    value:
      identity: ((smtp_username))
      password: ((smtp_password))
  .properties.smtp_enable_starttls_auto:
    value: ((smtp_enable_starttls))
  # Enable the Metric Registrar, a new feature in PAS 2.4
  # https://docs.pivotal.io/pivotalcf/2-4/installing/highlights.html#metric-registrar
  .properties.metric_registrar_enabled:
    value: 1
network-properties:
  # Network into which PAS is deployed
  network:
    name: ((pas_main_network_name))
  # Network for on-demand services (not used by PAS itself)
  service_network:
    name: ((pas_services_network_name))
  # AZs for components that can scale out (AZs are not supported by PAS 2.4 on Azure)
  other_availability_zones: "null"
  # AZ for components that cannot scale out (AZs are not supported by PAS 2.4 on Azure)
  singleton_availability_zone:
    name: "null"
# Per-VM settings (instance count, VM type, VM extensions)
resource-config:
  nats:
    instances: 1
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  backup_restore:
    instances: 0
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  diego_database:
    instances: 1
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  uaa:
    instances: 1
    instance_type:
      id: Standard_DS1_v2
    additional_vm_extensions: []
  cloud_controller:
    instances: 1
    instance_type:
      id: Standard_DS1_v2
    additional_vm_extensions: []
  ha_proxy:
    instances: 0
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  router:
    instances: 1
    instance_type:
      id: Standard_B1s
    elb_names:
    - ((web_lb_name))
    additional_vm_extensions: []
  mysql_monitor:
    instances: 0
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  clock_global:
    instances: 1
    instance_type:
      id: Standard_DS1_v2
    additional_vm_extensions: []
  cloud_controller_worker:
    instances: 1
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  diego_brain:
    instances: 1
    instance_type:
      id: Standard_B1s
    elb_names:
    - ((ssh_lb_name))
    additional_vm_extensions: []
  diego_cell:
    instances: 1
    instance_type:
      id: Standard_DS12_v2
    additional_vm_extensions: []
  loggregator_trafficcontroller:
    instances: 1
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  syslog_adapter:
    instances: 1
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  syslog_scheduler:
    instances: 1
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  doppler:
    instances: 1
    instance_type:
      id: Standard_DS1_v2
    additional_vm_extensions: []
  tcp_router:
    instances: 0
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  credhub:
    instances: 1
    instance_type:
      id: Standard_DS2_v2
    additional_vm_extensions: []
  # Definitions used when running the internal MySQL
  mysql_proxy:
    instances: 1
    instance_type:
      id: Standard_B1s
    additional_vm_extensions: []
  mysql:
    instances: 1
    instance_type:
      id: Standard_F2s
    persistent_disk:
      size_mb: "10240"
    additional_vm_extensions: []
  # Definitions used when running the internal blobstore
  nfs_server:
    instances: 0
    instance_type:
      id: Standard_B2s
    persistent_disk:
      size_mb: "51200"
    additional_vm_extensions: []
  # Not used in PAS 2.4 and later
  consul_server:
    instances: 0
EOF

Configure PAS using config.yml and vars.yml.

cd ..
om configure-product \
  --config ./pas/config.yml \
  --vars-file ./pas/vars.yml
Output
configuring product...
setting up network
finished setting up network
setting properties
finished setting properties
applying resource configuration for the following jobs:
  backup_restore
  clock_global
  cloud_controller
  cloud_controller_worker
  credhub
  diego_brain
  diego_cell
  diego_database
  doppler
  ha_proxy
  loggregator_trafficcontroller
  mysql
  mysql_monitor
  mysql_proxy
  nats
  nfs_server
  router
  syslog_adapter
  syslog_scheduler
  tcp_router
  uaa
errands are not provided, nothing to do here
finished configuring product

All properties can be checked with om staged-config -p cf -c -r.
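
For example, to save the entire staged configuration to a file for review:

om staged-config -p cf -c -r > staged-cf.yml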

Deploying PAS

Apply the changes to install BOSH Director and PAS.

om apply-changes

Wait for it to finish. It takes about two hours...

image

When it completes, "Changes Applied" is displayed.

image

With the settings above, 16 VMs are created.

image

The component diagram at this point looks like the following.

image

PlantUML (for reference)
@startuml
package "infrastructure" {
  package "10.0.8.0/26" {
    node "Ops Manager"
    node "BOSH Director"
  }
}

package "main" {
  package "10.0.0.0/22" {
    node "NATS"
    node "Router"
    package "MySQL" {
      node "MySQL Proxy"
      database "MySQL Server"
    }
    package "CAPI" {
      node "Cloud Controller"
      node "Clock Global"
      node "Cloud Controller Worker"
    }
    package "Diego" {   
      node "Diego Brain"
      node "DiegoCell" {
         (app3)
         (app2)
         (app1)
      }
      node "Diego BBS"
    }
    package "Loggregator" {
      node "Loggregator Trafficcontroller"
      node "Syslog Adapter"
      node "Syslog Scheduler"
      node "Doppler Server"
    }
    node "UAA"
    node "CredHub"
  }
}

package "services" {
  package "10.0.4.0/22" {

  }
}

database "Azure Blobstore"

rectangle "web-lb"
rectangle "ssh-lb"

boundary "ops-manager-ip"
boundary "web-lb-ip"
boundary "ssh-lb-ip"

node firehose
actor User #red
actor Developer #blue
actor Operator #green

[ops-manager-ip] -down-> [Ops Manager]
[web-lb-ip] -down-> [web-lb]
[ssh-lb-ip] -down-> [ssh-lb]

User -[#red]--> [web-lb-ip]
Developer -[#blue]--> [web-lb-ip] : "cf push"
Developer -[#magenta]--> [ssh-lb-ip] : "cf ssh"
Operator -[#green]--> [ops-manager-ip]

[Ops Manager] .> [BOSH Director] :bosh
[BOSH Director] .down.> main :create-vm
[BOSH Director] .down.> services :create-vm

[web-lb] -[#red]-> Router
[web-lb] -[#blue]-> Router
[ssh-lb] -[#magenta]-> [Diego Brain]

Router -[#red]-> app1
Router -[#blue]-> [Cloud Controller]
Router -[#blue]-> [UAA]
[Diego Brain] -[#magenta]-> app2
[Cloud Controller] --> [MySQL Proxy]
[Cloud Controller] <-left-> [Cloud Controller Worker]
[Cloud Controller] <-right-> [Clock Global]
[Cloud Controller] --> [Azure Blobstore]
[Clock Global] <-> [Diego BBS]
[UAA] --> [MySQL Proxy]
[CredHub] -up-> [MySQL Proxy]
[Diego BBS] --> [MySQL Proxy]
[MySQL Proxy] --> [MySQL Server]
[Diego Brain] <-> [Diego BBS]
[Diego Brain] <--> DiegoCell
DiegoCell -up-> NATS 
DiegoCell -> CredHub 
NATS -up-> Router
[Doppler Server] --> [Loggregator Trafficcontroller]
[Loggregator Trafficcontroller] -right-> [Syslog Adapter]
[Syslog Adapter] -up-> [Syslog Scheduler]
[Loggregator Trafficcontroller] --> firehose

Diego .> [Doppler Server] : metrics
CAPI .> [Doppler Server] : metrics
Router .> [Doppler Server] : metrics
app1 ..> [Doppler Server] : log&metrics
app2 ..> [Doppler Server] : log&metrics
app3 ..> [Doppler Server] : log&metrics
@enduml

Logging in to PAS

The admin user's password can be found under "Admin Credentials" for "UAA" on the "Credentials" tab of the "Pivotal Application Service" tile.

image

image

It can also be retrieved with the om command.

ADMIN_PASSWORD=$(om credentials -p cf -c .uaa.admin_credentials --format json | jq -r .password)

Log in with the cf login command. The API server URL is https://api.$(terraform output sys_domain).

API_URL=https://api.$(terraform output sys_domain)

cf login -a ${API_URL} -u admin -p ${ADMIN_PASSWORD}
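
As a quick check that the login succeeded, list the orgs visible to the admin user:

cf orgs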

Accessing Apps Manager

The Apps Manager URL is https://apps.$(terraform output sys_domain). Access it in a browser.

image

Log in with the same account as for cf login.

image

Trying out PAS

Download the sample application.

wget https://gist.github.com/making/fca49149aea3a7307b293685ba20c7b7/raw/6daab9a0a88fe0f36072ca4d1ee622d2354f3505/pcf-ers-demo1-0.0.1-SNAPSHOT.jar

Deploy the Java application to PAS with cf push <app name> -p <path to the jar>.

cf push attendees -p pcf-ers-demo1-0.0.1-SNAPSHOT.jar -m 768m
Output
Pushing app attendees to org system / space system as admin...
Getting app info...
Creating app with these attributes...
+ name:       attendees
  path:       /private/tmp/pcf-ers-demo1-0.0.1-SNAPSHOT.jar
+ memory:     768M
  routes:
+   attendees.apps.pas.bosh.tokyo

Creating app attendees...
Mapping routes...
Comparing local files to remote cache...
Packaging files to upload...
Uploading files...
 681.36 KiB / 681.36 KiB [============================================================================================================================================================================================================================] 100.00% 1s

Waiting for API to complete processing files...

Staging app and tracing logs...
   Downloading binary_buildpack...
   Downloading ruby_buildpack...
   Downloading staticfile_buildpack...
   Downloading java_buildpack_offline...
   Downloading python_buildpack...
   Downloaded ruby_buildpack
   Downloaded staticfile_buildpack
   Downloading go_buildpack...
   Downloaded java_buildpack_offline
   Downloading php_buildpack...
   Downloading nodejs_buildpack...
   Downloaded python_buildpack
   Downloading dotnet_core_buildpack...
   Downloaded binary_buildpack
   Downloaded php_buildpack
   Downloaded go_buildpack
   Downloaded nodejs_buildpack
   Downloaded dotnet_core_buildpack
   Cell 7835f3b0-4951-4d38-a70f-1cf3d60fd9ab creating container for instance ce80e22f-a0f3-40f8-8ccb-4953784a9a0f
   Cell 7835f3b0-4951-4d38-a70f-1cf3d60fd9ab successfully created container for instance ce80e22f-a0f3-40f8-8ccb-4953784a9a0f
   Downloading app package...
   Downloaded app package (34.4M)
   -----> Java Buildpack v4.16.1 (offline) | https://github.com/cloudfoundry/java-buildpack.git#41b8ff8
   -----> Downloading Jvmkill Agent 1.16.0_RELEASE from https://java-buildpack.cloudfoundry.org/jvmkill/bionic/x86_64/jvmkill-1.16.0_RELEASE.so (found in cache)
   -----> Downloading Open Jdk JRE 1.8.0_192 from https://java-buildpack.cloudfoundry.org/openjdk/bionic/x86_64/openjdk-1.8.0_192.tar.gz (found in cache)
          Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.1s)
          JVM DNS caching disabled in lieu of BOSH DNS caching
   -----> Downloading Open JDK Like Memory Calculator 3.13.0_RELEASE from https://java-buildpack.cloudfoundry.org/memory-calculator/bionic/x86_64/memory-calculator-3.13.0_RELEASE.tar.gz (found in cache)
          Loaded Classes: 17720, Threads: 250
   -----> Downloading Client Certificate Mapper 1.8.0_RELEASE from https://java-buildpack.cloudfoundry.org/client-certificate-mapper/client-certificate-mapper-1.8.0_RELEASE.jar (found in cache)
   -----> Downloading Container Security Provider 1.16.0_RELEASE from https://java-buildpack.cloudfoundry.org/container-security-provider/container-security-provider-1.16.0_RELEASE.jar (found in cache)
   -----> Downloading Spring Auto Reconfiguration 2.5.0_RELEASE from https://java-buildpack.cloudfoundry.org/auto-reconfiguration/auto-reconfiguration-2.5.0_RELEASE.jar (found in cache)
   Exit status 0
   Uploading droplet, build artifacts cache...
   Uploading droplet...
   Uploading build artifacts cache...
   Uploaded build artifacts cache (128B)
   Uploaded droplet (81.6M)
   Uploading complete
   Cell 7835f3b0-4951-4d38-a70f-1cf3d60fd9ab stopping instance ce80e22f-a0f3-40f8-8ccb-4953784a9a0f
   Cell 7835f3b0-4951-4d38-a70f-1cf3d60fd9ab destroying container for instance ce80e22f-a0f3-40f8-8ccb-4953784a9a0f

Waiting for app to start...
   Cell 7835f3b0-4951-4d38-a70f-1cf3d60fd9ab successfully destroyed container for instance ce80e22f-a0f3-40f8-8ccb-4953784a9a0f

name:              attendees
requested state:   started
routes:            attendees.apps.pas.bosh.tokyo
last uploaded:     Thu 14 Mar 20:40:16 JST 2019
stack:             cflinuxfs3
buildpacks:        client-certificate-mapper=1.8.0_RELEASE container-security-provider=1.16.0_RELEASE java-buildpack=v4.16.1-offline-https://github.com/cloudfoundry/java-buildpack.git#41b8ff8 java-main java-opts java-security
                   jvmkill-agent=1.16.0_RELEASE open-jd...

type:            web
instances:       1/1
memory usage:    768M
start command:   JAVA_OPTS="-agentpath:$PWD/.java-buildpack/open_jdk_jre/bin/jvmkill-1.16.0_RELEASE=printHeapHistogram=1 -Djava.io.tmpdir=$TMPDIR -Djava.ext.dirs=$PWD/.java-buildpack/container_security_provider:$PWD/.java-buildpack/open_jdk_jre/lib/ext
                 -Djava.security.properties=$PWD/.java-buildpack/java_security/java.security $JAVA_OPTS" && CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-3.13.0_RELEASE -totMemory=$MEMORY_LIMIT
                 -loadedClasses=18499 -poolType=metaspace -stackThreads=250 -vmOptions="$JAVA_OPTS") && echo JVM Memory Configuration: $CALCULATED_MEMORY && JAVA_OPTS="$JAVA_OPTS $CALCULATED_MEMORY" && MALLOC_ARENA_MAX=2 SERVER_PORT=$PORT eval exec
                 $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher
     state     since                  cpu    memory           disk           details
#0   running   2019-03-14T11:40:38Z   0.0%   188.1M of 768M   164.8M of 1G

Access https://attendees.$(terraform output apps_domain) in a browser.

image

You can SSH into the application container with cf ssh.

cf ssh attendees
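
cf ssh can also run a one-off command in the container without starting an interactive shell, for example:

cf ssh attendees -c "ps aux"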

For what to do next, head to https://github.com/Pivotal-Field-Engineering/pcf-ers-demo/tree/master/Labs.

SSH into OpsManager

Create a script for logging in to the OpsManager VM over SSH.

export OM_TARGET=$(terraform output ops_manager_dns)
OPS_MANAGER_SSH_PRIVATE_KEY=$(terraform output ops_manager_ssh_private_key)

cat <<EOF > ssh-opsman.sh
#!/bin/bash
cat << KEY > opsman.pem
${OPS_MANAGER_SSH_PRIVATE_KEY}
KEY
chmod 600 opsman.pem
ssh -i opsman.pem -o "StrictHostKeyChecking=no" -l ubuntu ${OM_TARGET}
EOF

chmod +x ssh-opsman.sh

Run ssh-opsman.sh to log in to the OpsManager VM.

./ssh-opsman.sh
Output
Unauthorized use is strictly prohibited. All access and activity
is subject to logging and monitoring.
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.15.0-50-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
Last login: Tue May 21 09:07:40 UTC 2019 from 52.187.175.241 on pts/0
Last login: Tue May 21 10:58:33 2019 from 52.230.80.183
ubuntu@pas-ops-manager:~$ 

Log out when you are done.

Configuring the BOSH CLI

Set up the bosh CLI inside the OpsManager VM.

image

image

Setting the following four environment variables

  • BOSH_ENVIRONMENT
  • BOSH_CLIENT
  • BOSH_CLIENT_SECRET
  • BOSH_CA_CERT

allows the bosh CLI to access the deployed BOSH Director.

Again, we deliberately set this up via the CLI.

export OM_TARGET=$(terraform output ops_manager_dns)
OPS_MANAGER_SSH_PRIVATE_KEY=$(terraform output ops_manager_ssh_private_key)

cat <<EOF > opsman.pem
${OPS_MANAGER_SSH_PRIVATE_KEY}
EOF
chmod 600 opsman.pem
BOSH_CLI=$(om curl -s -p "/api/v0/deployed/director/credentials/bosh_commandline_credentials" | jq -r '.credential')

ssh -q -i opsman.pem \
  -o "StrictHostKeyChecking=no" \
  ubuntu@${OM_TARGET} "echo $BOSH_CLI | sed 's/ /\n/g' | sed 's/^/export /g' | sed '/bosh/d' | sudo tee /etc/profile.d/bosh.sh" > /dev/null

Run ssh-opsman.sh, log in to the OpsManager VM, and run the bosh env command.

bosh env
Output
Using environment '10.0.8.10' as client 'ops_manager'

Name      p-bosh
UUID      9e0e4dcc-5687-49b7-8bd4-5f73efb19609
Version   268.2.3 (00000000)
CPI       azure_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          local_dns: enabled
          power_dns: disabled
          snapshots: disabled
User      ops_manager

Succeeded

You can list the VMs managed by BOSH Director with bosh vms.

bosh vms
Output
Using environment '10.0.8.10' as client 'ops_manager'

Task 237. Done

Deployment 'cf-d85f2822c760722ac86a'

Instance                                                            Process State  AZ    IPs        VM CID                                                                 VM Type           Active
clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca                   running        null  10.0.0.17  agent_id:5d461492-dd7f-443d-bce2-5ee9df8d1567;resource_group_name:pas  Standard_DS1_v2   true
cloud_controller/eaf8eb7a-c545-4c2d-915f-1cb93fa58d56               running        null  10.0.0.15  agent_id:fb96faf6-6c69-485e-8e0b-0296ad0b6914;resource_group_name:pas  Standard_DS1_v2   true
cloud_controller_worker/f907ea3d-a0fb-4660-8daf-4f4fd148a815        running        null  10.0.0.18  agent_id:33556fea-f9ef-4c35-8f38-24836f49bad8;resource_group_name:pas  Standard_B1s      true
credhub/907dda34-af6f-4f59-9739-17c0bfe93e0b                        running        null  10.0.0.25  agent_id:d88ca88a-d75f-4fd5-b9ad-fe9e636201c8;resource_group_name:pas  Standard_DS2_v2   true
diego_brain/2fa4d99e-0850-402d-9516-2935caa290ec                    running        null  10.0.0.19  agent_id:db7cbbd1-bb4e-4ba0-a84a-33e13393e6cb;resource_group_name:pas  Standard_B1s      true
diego_cell/a928216d-f2ba-429b-82fa-0ac01c8d4494                     running        null  10.0.0.20  agent_id:38ea257e-bfd3-42f7-a75f-d6ac35df9174;resource_group_name:pas  Standard_DS12_v2  true
diego_database/d4526128-29d2-47d2-8217-0a5f1100babd                 running        null  10.0.0.13  agent_id:f3654082-0f68-4617-9fed-edd186fdcfa6;resource_group_name:pas  Standard_B1s      true
doppler/d90decdd-daba-49ea-a586-c1e51ac1dcf9                        running        null  10.0.0.24  agent_id:020adacb-b4d9-449f-ac11-6c2d68136658;resource_group_name:pas  Standard_DS1_v2   true
loggregator_trafficcontroller/c4d45693-6a16-4a32-9bd5-30f24401392d  running        null  10.0.0.21  agent_id:9ebd430e-4e4d-4084-bab2-60b74103f0b7;resource_group_name:pas  Standard_B1s      true
mysql/cc3e73ed-e3f3-4ecb-b542-96ad58fa224d                          running        null  10.0.0.12  agent_id:cdda408d-ddf6-42a4-aca3-c644cab7d06c;resource_group_name:pas  Standard_F2s      true
mysql_proxy/5447095f-11d7-45fb-94c5-51b14c7365c7                    running        null  10.0.0.11  agent_id:36ddedfb-d6c5-4fe5-a308-b7625f49090c;resource_group_name:pas  Standard_B1s      true
nats/cd096fc1-f705-4d1a-9819-ce1849489a99                           running        null  10.0.0.10  agent_id:15b9e26b-9ee9-4073-90d3-78b417e767d2;resource_group_name:pas  Standard_B1s      true
router/c015334d-d8a9-405d-abc2-8668bc6f9a2e                         running        null  10.0.0.16  agent_id:ae0337df-6fc5-4dd2-99a7-1744c0d674f2;resource_group_name:pas  Standard_B1s      true
syslog_adapter/f4b8a3dc-bb93-4db9-8e91-b98818473700                 running        null  10.0.0.22  agent_id:54432291-39a5-4489-a982-25719306a184;resource_group_name:pas  Standard_B1s      true
syslog_scheduler/1e9c3bcf-a783-4acf-a622-e8f8ef396f6c               running        null  10.0.0.23  agent_id:83764e15-02cc-4ee0-920d-b551d0c9a2a5;resource_group_name:pas  Standard_B1s      true
uaa/e30941bf-9a3f-422e-bd6a-ece8005eaa91                            running        null  10.0.0.14  agent_id:16b1df56-b650-4ad2-8136-96de676a3220;resource_group_name:pas  Standard_DS1_v2   true

16 vms

Succeeded

The bosh instances --ps command shows the list of processes per instance.

bosh instances --ps
Output
Using environment '10.0.8.10' as client 'ops_manager'

Task 238. Done

Deployment 'cf-d85f2822c760722ac86a'

Instance                                                            Process                           Process State  AZ    IPs
clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca                   -                                 running        null  10.0.0.17
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   cc_deployment_updater             running        -     -
~                                                                   cloud_controller_clock            running        -     -
~                                                                   log-cache-scheduler               running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   metric_registrar_orchestrator     running        -     -
cloud_controller/eaf8eb7a-c545-4c2d-915f-1cb93fa58d56               -                                 running        null  10.0.0.15
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   cloud_controller_ng               running        -     -
~                                                                   cloud_controller_worker_local_1   running        -     -
~                                                                   cloud_controller_worker_local_2   running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   nginx_cc                          running        -     -
~                                                                   route_registrar                   running        -     -
~                                                                   routing-api                       running        -     -
~                                                                   statsd_injector                   running        -     -
cloud_controller_worker/f907ea3d-a0fb-4660-8daf-4f4fd148a815        -                                 running        null  10.0.0.18
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   cloud_controller_worker_1         running        -     -
~                                                                   loggregator_agent                 running        -     -
credhub/907dda34-af6f-4f59-9739-17c0bfe93e0b                        -                                 running        null  10.0.0.25
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   credhub                           running        -     -
~                                                                   loggregator_agent                 running        -     -
diego_brain/2fa4d99e-0850-402d-9516-2935caa290ec                    -                                 running        null  10.0.0.19
~                                                                   auctioneer                        running        -     -
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   cc_uploader                       running        -     -
~                                                                   file_server                       running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   service-discovery-controller      running        -     -
~                                                                   ssh_proxy                         running        -     -
~                                                                   tps_watcher                       running        -     -
diego_cell/a928216d-f2ba-429b-82fa-0ac01c8d4494                     -                                 running        null  10.0.0.20
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-adapter                  running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   garden                            running        -     -
~                                                                   iptables-logger                   running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   netmon                            running        -     -
~                                                                   nfsv3driver                       running        -     -
~                                                                   rep                               running        -     -
~                                                                   route_emitter                     running        -     -
~                                                                   silk-daemon                       running        -     -
~                                                                   vxlan-policy-agent                running        -     -
diego_database/d4526128-29d2-47d2-8217-0a5f1100babd                 -                                 running        null  10.0.0.13
~                                                                   bbs                               running        -     -
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   locket                            running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   policy-server                     running        -     -
~                                                                   policy-server-internal            running        -     -
~                                                                   route_registrar                   running        -     -
~                                                                   silk-controller                   running        -     -
doppler/d90decdd-daba-49ea-a586-c1e51ac1dcf9                        -                                 running        null  10.0.0.24
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   doppler                           running        -     -
~                                                                   log-cache                         running        -     -
~                                                                   log-cache-cf-auth-proxy           running        -     -
~                                                                   log-cache-expvar-forwarder        running        -     -
~                                                                   log-cache-gateway                 running        -     -
~                                                                   log-cache-nozzle                  running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   metric_registrar_endpoint_worker  running        -     -
~                                                                   metric_registrar_log_worker       running        -     -
~                                                                   route_registrar                   running        -     -
loggregator_trafficcontroller/c4d45693-6a16-4a32-9bd5-30f24401392d  -                                 running        null  10.0.0.21
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   bosh-system-metrics-forwarder     running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   loggregator_trafficcontroller     running        -     -
~                                                                   reverse_log_proxy                 running        -     -
~                                                                   reverse_log_proxy_gateway         running        -     -
~                                                                   route_registrar                   running        -     -
mysql/cc3e73ed-e3f3-4ecb-b542-96ad58fa224d                          -                                 running        null  10.0.0.12
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   cluster-health-logger             running        -     -
~                                                                   galera-agent                      running        -     -
~                                                                   galera-init                       running        -     -
~                                                                   gra-log-purger                    running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   mysql-diag-agent                  running        -     -
~                                                                   mysql-metrics                     running        -     -
mysql_proxy/5447095f-11d7-45fb-94c5-51b14c7365c7                    -                                 running        null  10.0.0.11
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   proxy                             running        -     -
~                                                                   route_registrar                   running        -     -
nats/cd096fc1-f705-4d1a-9819-ce1849489a99                           -                                 running        null  10.0.0.10
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   nats                              running        -     -
router/c015334d-d8a9-405d-abc2-8668bc6f9a2e                         -                                 running        null  10.0.0.16
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   gorouter                          running        -     -
~                                                                   loggregator_agent                 running        -     -
syslog_adapter/f4b8a3dc-bb93-4db9-8e91-b98818473700                 -                                 running        null  10.0.0.22
~                                                                   adapter                           running        -     -
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   loggregator_agent                 running        -     -
syslog_scheduler/1e9c3bcf-a783-4acf-a622-e8f8ef396f6c               -                                 running        null  10.0.0.23
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   scheduler                         running        -     -
uaa/e30941bf-9a3f-422e-bd6a-ece8005eaa91                            -                                 running        null  10.0.0.14
~                                                                   bosh-dns                          running        -     -
~                                                                   bosh-dns-healthcheck              running        -     -
~                                                                   bosh-dns-resolvconf               running        -     -
~                                                                   loggregator_agent                 running        -     -
~                                                                   route_registrar                   running        -     -
~                                                                   statsd_injector                   running        -     -
~                                                                   uaa                               running        -     -

16 instances

Succeeded

Pausing PAS

bosh -d <deployment_name> stop stops all processes in the given deployment (for PAS, the deployment name is cf-*********). In this case the VMs themselves keep running. If you add the --hard option, the VMs are deleted as well; the persistent disks are retained, however, so the deployment can be restored later.
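For reference, the soft form that keeps the VMs running looks like this (a sketch; the deployment name is masked as elsewhere in this post):

# Stop all jobs but keep the VMs around (you keep paying for them)
bosh -d cf-*************** stop -n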

If you have increased the MySQL instance count to 3, be sure to scale it back to 1 before running bosh stop.
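As a quick sanity check, you can count the mysql instances in the deployment before stopping. A minimal sketch:

# Should print 1 before running bosh stop --hard
bosh -d cf-*************** instances --column instance | grep -c '^mysql/'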

bosh -d cf-*************** stop --hard -n
Output
Using environment '10.0.8.10' as client 'ops_manager'

Using deployment 'cf-d85f2822c760722ac86a'

Task 240

Task 240 | 11:01:12 | Preparing deployment: Preparing deployment (00:00:04)
Task 240 | 11:01:50 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 240 | 11:01:51 | Updating instance nats: nats/cd096fc1-f705-4d1a-9819-ce1849489a99 (0) (canary) (00:01:40)
Task 240 | 11:03:31 | Updating instance mysql_proxy: mysql_proxy/5447095f-11d7-45fb-94c5-51b14c7365c7 (0) (canary)
Task 240 | 11:03:31 | Updating instance mysql: mysql/cc3e73ed-e3f3-4ecb-b542-96ad58fa224d (0) (canary) (00:02:06)
Task 240 | 11:05:41 | Updating instance mysql_proxy: mysql_proxy/5447095f-11d7-45fb-94c5-51b14c7365c7 (0) (canary) (00:02:10)
Task 240 | 11:05:41 | Updating instance diego_database: diego_database/d4526128-29d2-47d2-8217-0a5f1100babd (0) (canary) (00:02:13)
Task 240 | 11:07:54 | Updating instance uaa: uaa/e30941bf-9a3f-422e-bd6a-ece8005eaa91 (0) (canary) (00:01:49)
Task 240 | 11:09:44 | Updating instance cloud_controller: cloud_controller/eaf8eb7a-c545-4c2d-915f-1cb93fa58d56 (0) (canary) (00:03:01)
Task 240 | 11:12:45 | Updating instance router: router/c015334d-d8a9-405d-abc2-8668bc6f9a2e (0) (canary) (00:01:27)
Task 240 | 11:14:12 | Updating instance syslog_adapter: syslog_adapter/f4b8a3dc-bb93-4db9-8e91-b98818473700 (0) (canary)
Task 240 | 11:14:12 | Updating instance doppler: doppler/d90decdd-daba-49ea-a586-c1e51ac1dcf9 (0) (canary)
Task 240 | 11:14:12 | Updating instance diego_brain: diego_brain/2fa4d99e-0850-402d-9516-2935caa290ec (0) (canary)
Task 240 | 11:14:12 | Updating instance cloud_controller_worker: cloud_controller_worker/f907ea3d-a0fb-4660-8daf-4f4fd148a815 (0) (canary)
Task 240 | 11:14:12 | Updating instance syslog_scheduler: syslog_scheduler/1e9c3bcf-a783-4acf-a622-e8f8ef396f6c (0) (canary)
Task 240 | 11:14:12 | Updating instance credhub: credhub/907dda34-af6f-4f59-9739-17c0bfe93e0b (0) (canary)
Task 240 | 11:14:12 | Updating instance diego_cell: diego_cell/a928216d-f2ba-429b-82fa-0ac01c8d4494 (0) (canary)
Task 240 | 11:14:12 | Updating instance clock_global: clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca (0) (canary)
Task 240 | 11:14:12 | Updating instance loggregator_trafficcontroller: loggregator_trafficcontroller/c4d45693-6a16-4a32-9bd5-30f24401392d (0) (canary)
Task 240 | 11:16:02 | Updating instance cloud_controller_worker: cloud_controller_worker/f907ea3d-a0fb-4660-8daf-4f4fd148a815 (0) (canary) (00:01:50)
Task 240 | 11:16:03 | Updating instance credhub: credhub/907dda34-af6f-4f59-9739-17c0bfe93e0b (0) (canary) (00:01:51)
Task 240 | 11:16:03 | Updating instance syslog_scheduler: syslog_scheduler/1e9c3bcf-a783-4acf-a622-e8f8ef396f6c (0) (canary) (00:01:51)
Task 240 | 11:16:04 | Updating instance clock_global: clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca (0) (canary) (00:01:52)
Task 240 | 11:16:08 | Updating instance diego_brain: diego_brain/2fa4d99e-0850-402d-9516-2935caa290ec (0) (canary) (00:01:56)
Task 240 | 11:16:10 | Updating instance doppler: doppler/d90decdd-daba-49ea-a586-c1e51ac1dcf9 (0) (canary) (00:01:58)
Task 240 | 11:16:32 | Updating instance loggregator_trafficcontroller: loggregator_trafficcontroller/c4d45693-6a16-4a32-9bd5-30f24401392d (0) (canary) (00:02:20)
Task 240 | 11:16:50 | Updating instance syslog_adapter: syslog_adapter/f4b8a3dc-bb93-4db9-8e91-b98818473700 (0) (canary) (00:02:38)
Task 240 | 11:26:19 | Updating instance diego_cell: diego_cell/a928216d-f2ba-429b-82fa-0ac01c8d4494 (0) (canary) (00:12:07)

Task 240 Started  Tue May 21 11:01:12 UTC 2019
Task 240 Finished Tue May 21 11:26:19 UTC 2019
Task 240 Duration 00:25:07
Task 240 done

Succeeded

In the Azure portal you can confirm that the PAS VMs have been deleted.
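The same check can be done from the CLI. A minimal sketch, assuming the resource group created by Terraform is named pas (matching env_name in terraform.tfvars):

# After stop --hard, only the Ops Manager and BOSH Director VMs should remain
az vm list -g pas --query "[].name" -o tsv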

image

Restarting PAS

bosh -d cf-*************** start -n
Output
Using environment '10.0.8.10' as client 'ops_manager'

Using deployment 'cf-d85f2822c760722ac86a'

Task 258

Task 258 | 13:51:48 | Preparing deployment: Preparing deployment (00:00:04)
Task 258 | 13:52:24 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 258 | 13:52:25 | Creating missing vms: nats/cd096fc1-f705-4d1a-9819-ce1849489a99 (0)
Task 258 | 13:52:25 | Creating missing vms: clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca (0)
Task 258 | 13:52:25 | Creating missing vms: diego_brain/2fa4d99e-0850-402d-9516-2935caa290ec (0)
Task 258 | 13:52:25 | Creating missing vms: cloud_controller/eaf8eb7a-c545-4c2d-915f-1cb93fa58d56 (0)
Task 258 | 13:52:25 | Creating missing vms: uaa/e30941bf-9a3f-422e-bd6a-ece8005eaa91 (0)
Task 258 | 13:52:25 | Creating missing vms: mysql_proxy/5447095f-11d7-45fb-94c5-51b14c7365c7 (0)
Task 258 | 13:52:25 | Creating missing vms: diego_database/d4526128-29d2-47d2-8217-0a5f1100babd (0)
Task 258 | 13:52:25 | Creating missing vms: cloud_controller_worker/f907ea3d-a0fb-4660-8daf-4f4fd148a815 (0)
Task 258 | 13:52:25 | Creating missing vms: mysql/cc3e73ed-e3f3-4ecb-b542-96ad58fa224d (0)
Task 258 | 13:52:25 | Creating missing vms: router/c015334d-d8a9-405d-abc2-8668bc6f9a2e (0)
Task 258 | 13:54:30 | Creating missing vms: diego_brain/2fa4d99e-0850-402d-9516-2935caa290ec (0) (00:02:05)
Task 258 | 13:54:30 | Creating missing vms: diego_cell/a928216d-f2ba-429b-82fa-0ac01c8d4494 (0)
Task 258 | 13:54:31 | Creating missing vms: mysql_proxy/5447095f-11d7-45fb-94c5-51b14c7365c7 (0) (00:02:06)
Task 258 | 13:54:31 | Creating missing vms: cloud_controller_worker/f907ea3d-a0fb-4660-8daf-4f4fd148a815 (0) (00:02:06)
Task 258 | 13:54:31 | Creating missing vms: loggregator_trafficcontroller/c4d45693-6a16-4a32-9bd5-30f24401392d (0)
Task 258 | 13:54:32 | Creating missing vms: syslog_adapter/f4b8a3dc-bb93-4db9-8e91-b98818473700 (0)
Task 258 | 13:54:33 | Creating missing vms: diego_database/d4526128-29d2-47d2-8217-0a5f1100babd (0) (00:02:08)
Task 258 | 13:54:33 | Creating missing vms: router/c015334d-d8a9-405d-abc2-8668bc6f9a2e (0) (00:02:08)
Task 258 | 13:54:33 | Creating missing vms: syslog_scheduler/1e9c3bcf-a783-4acf-a622-e8f8ef396f6c (0)
Task 258 | 13:54:33 | Creating missing vms: doppler/d90decdd-daba-49ea-a586-c1e51ac1dcf9 (0)
Task 258 | 13:54:33 | Creating missing vms: nats/cd096fc1-f705-4d1a-9819-ce1849489a99 (0) (00:02:08)
Task 258 | 13:54:33 | Creating missing vms: credhub/907dda34-af6f-4f59-9739-17c0bfe93e0b (0)
Task 258 | 13:54:37 | Creating missing vms: cloud_controller/eaf8eb7a-c545-4c2d-915f-1cb93fa58d56 (0) (00:02:12)
Task 258 | 13:54:41 | Creating missing vms: clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca (0) (00:02:16)
Task 258 | 13:55:04 | Creating missing vms: uaa/e30941bf-9a3f-422e-bd6a-ece8005eaa91 (0) (00:02:39)
Task 258 | 13:56:03 | Creating missing vms: mysql/cc3e73ed-e3f3-4ecb-b542-96ad58fa224d (0) (00:03:38)
Task 258 | 13:56:26 | Creating missing vms: diego_cell/a928216d-f2ba-429b-82fa-0ac01c8d4494 (0) (00:01:56)
Task 258 | 13:56:28 | Creating missing vms: credhub/907dda34-af6f-4f59-9739-17c0bfe93e0b (0) (00:01:55)
Task 258 | 13:56:34 | Creating missing vms: doppler/d90decdd-daba-49ea-a586-c1e51ac1dcf9 (0) (00:02:01)
Task 258 | 13:56:34 | Creating missing vms: syslog_scheduler/1e9c3bcf-a783-4acf-a622-e8f8ef396f6c (0) (00:02:01)
Task 258 | 13:57:03 | Creating missing vms: loggregator_trafficcontroller/c4d45693-6a16-4a32-9bd5-30f24401392d (0) (00:02:32)
Task 258 | 13:57:03 | Creating missing vms: syslog_adapter/f4b8a3dc-bb93-4db9-8e91-b98818473700 (0) (00:02:31)
Task 258 | 13:57:05 | Updating instance nats: nats/cd096fc1-f705-4d1a-9819-ce1849489a99 (0) (canary) (00:00:47)
Task 258 | 13:57:52 | Updating instance mysql_proxy: mysql_proxy/5447095f-11d7-45fb-94c5-51b14c7365c7 (0) (canary)
Task 258 | 13:57:52 | Updating instance mysql: mysql/cc3e73ed-e3f3-4ecb-b542-96ad58fa224d (0) (canary)
Task 258 | 13:58:39 | Updating instance mysql_proxy: mysql_proxy/5447095f-11d7-45fb-94c5-51b14c7365c7 (0) (canary) (00:00:47)
Task 258 | 13:59:26 | Updating instance mysql: mysql/cc3e73ed-e3f3-4ecb-b542-96ad58fa224d (0) (canary) (00:01:34)
Task 258 | 13:59:26 | Updating instance diego_database: diego_database/d4526128-29d2-47d2-8217-0a5f1100babd (0) (canary) (00:01:00)
Task 258 | 14:00:26 | Updating instance uaa: uaa/e30941bf-9a3f-422e-bd6a-ece8005eaa91 (0) (canary) (00:03:00)
Task 258 | 14:03:26 | Updating instance cloud_controller: cloud_controller/eaf8eb7a-c545-4c2d-915f-1cb93fa58d56 (0) (canary) (00:11:54)
Task 258 | 14:15:20 | Updating instance router: router/c015334d-d8a9-405d-abc2-8668bc6f9a2e (0) (canary) (00:01:50)
Task 258 | 14:17:10 | Updating instance syslog_adapter: syslog_adapter/f4b8a3dc-bb93-4db9-8e91-b98818473700 (0) (canary)
Task 258 | 14:17:10 | Updating instance doppler: doppler/d90decdd-daba-49ea-a586-c1e51ac1dcf9 (0) (canary)
Task 258 | 14:17:10 | Updating instance cloud_controller_worker: cloud_controller_worker/f907ea3d-a0fb-4660-8daf-4f4fd148a815 (0) (canary)
Task 258 | 14:17:10 | Updating instance diego_cell: diego_cell/a928216d-f2ba-429b-82fa-0ac01c8d4494 (0) (canary)
Task 258 | 14:17:10 | Updating instance clock_global: clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca (0) (canary)
Task 258 | 14:17:10 | Updating instance loggregator_trafficcontroller: loggregator_trafficcontroller/c4d45693-6a16-4a32-9bd5-30f24401392d (0) (canary)
Task 258 | 14:17:10 | Updating instance syslog_scheduler: syslog_scheduler/1e9c3bcf-a783-4acf-a622-e8f8ef396f6c (0) (canary)
Task 258 | 14:17:10 | Updating instance diego_brain: diego_brain/2fa4d99e-0850-402d-9516-2935caa290ec (0) (canary)
Task 258 | 14:17:10 | Updating instance credhub: credhub/907dda34-af6f-4f59-9739-17c0bfe93e0b (0) (canary)
Task 258 | 14:18:32 | Updating instance syslog_scheduler: syslog_scheduler/1e9c3bcf-a783-4acf-a622-e8f8ef396f6c (0) (canary) (00:01:22)
Task 258 | 14:18:42 | Updating instance syslog_adapter: syslog_adapter/f4b8a3dc-bb93-4db9-8e91-b98818473700 (0) (canary) (00:01:32)
Task 258 | 14:18:50 | Updating instance loggregator_trafficcontroller: loggregator_trafficcontroller/c4d45693-6a16-4a32-9bd5-30f24401392d (0) (canary) (00:01:40)
Task 258 | 14:18:55 | Updating instance doppler: doppler/d90decdd-daba-49ea-a586-c1e51ac1dcf9 (0) (canary) (00:01:45)
Task 258 | 14:19:19 | Updating instance cloud_controller_worker: cloud_controller_worker/f907ea3d-a0fb-4660-8daf-4f4fd148a815 (0) (canary) (00:02:09)
Task 258 | 14:19:28 | Updating instance credhub: credhub/907dda34-af6f-4f59-9739-17c0bfe93e0b (0) (canary) (00:02:18)
Task 258 | 14:20:22 | Updating instance diego_brain: diego_brain/2fa4d99e-0850-402d-9516-2935caa290ec (0) (canary) (00:03:12)
Task 258 | 14:21:25 | Updating instance diego_cell: diego_cell/a928216d-f2ba-429b-82fa-0ac01c8d4494 (0) (canary) (00:04:15)
Task 258 | 14:24:39 | Updating instance clock_global: clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca (0) (canary) (00:07:29)

Task 258 Started  Tue May 21 13:51:48 UTC 2019
Task 258 Finished Tue May 21 14:24:39 UTC 2019
Task 258 Duration 00:32:51
Task 258 done

Succeeded

⚠️ If, after bosh stop --hard, you stop the BOSH Director (bosh/0) from the Azure portal (not recommended), access to the BOSH Director will fail after you start it again from the portal. In that case, perform the following steps before bosh start.

  1. Log in to the OpsManager VM and SSH into the BOSH Director. See here for how to log in. (bbr.pem can be obtained with om credentials -p p-bosh -c .director.bbr_ssh_credentials --format json | jq -r .private_key_pem; a sketch follows this list.)
  2. sudo su -
  3. monit restart all
  4. Wait until monit summary shows every process as running
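A minimal sketch of these steps, assuming the OM_* environment variables are set as in the earlier steps, that the Director is reachable at 10.0.8.10 (taken from the bosh output above), and that the SSH user for the bbr credential is bbr:

# On the Azure Cloud Shell: extract the Director's SSH private key
om --skip-ssl-validation credentials -p p-bosh -c .director.bbr_ssh_credentials --format json | jq -r .private_key_pem > bbr.pem
chmod 600 bbr.pem
# Copy bbr.pem to the OpsManager VM and run the following there (the Director
# is not reachable from the Cloud Shell); monit lives at /var/vcap/bosh/bin/monit on BOSH VMs
ssh -i bbr.pem bbr@10.0.8.10 'sudo /var/vcap/bosh/bin/monit restart all'
# Re-run until every process shows as running
ssh -i bbr.pem bbr@10.0.8.10 'sudo /var/vcap/bosh/bin/monit summary'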

To make sure everything still works, it is reassuring to run the smoke tests.

bosh -d cf-*************** run-errand smoke_tests
Output
Using environment '10.0.8.10' as client 'ops_manager'

Using deployment 'cf-d85f2822c760722ac86a'

Task 259

Task 259 | 14:44:43 | Preparing deployment: Preparing deployment (00:00:04)
Task 259 | 14:44:47 | Running errand: clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca (0) (00:02:06)
Task 259 | 14:46:53 | Fetching logs for clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca (0): Finding and packing log files (00:00:01)

Task 259 Started  Tue May 21 14:44:43 UTC 2019
Task 259 Finished Tue May 21 14:46:54 UTC 2019
Task 259 Duration 00:02:11
Task 259 done

Instance   clock_global/f3435c7f-e9c9-4a19-be08-c926f35438ca
Exit Code  0
Stdout     Running smoke tests...
           Running binaries smoke/isolation_segments/isolation_segments.test
           smoke/logging/logging.test
           smoke/runtime/runtime.test
           [1558449887] CF-Isolation-Segment-Smoke-Tests - 4/4 specs SSSS SUCCESS! 175.414589ms PASS
           [1558449887] CF-Logging-Smoke-Tests - 2/2 specs •S SUCCESS! 1m6.841318193s PASS
           [1558449887] CF-Runtime-Smoke-Tests - 2/2 specs •S SUCCESS! 56.830822763s PASS

           Ginkgo ran 3 suites in 2m4.327852785s
           Test Suite Passed

Stderr     -

1 errand(s)

Succeeded

Uninstalling the BOSH Director and PAS

om delete-installation 
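delete-installation takes a while. As a sketch, you can watch its progress through the Ops Manager API (the jq filter is an assumption about the response shape; OM_* environment variables are assumed to be set as in the earlier steps):

# Show the status of the most recent installation task (e.g. running / succeeded)
om --skip-ssl-validation curl --path /api/v0/installations | jq -r '.installations[0].status'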

Deleting the Azure environment

terraform destroy -force template/terraforming-pas
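Once the destroy completes, you can confirm that the resource group is gone (again assuming it is named pas):

# Prints "false" once the resource group has been deleted
az group exists -n pas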
