
VM deployment & configuration with Puppet and vRealize Automation 7

·1110 words·6 mins·
Puppet vRealize Automation blogposts

As you may know, vRealize Automation 7 brought us a great number of improvements, one of which is the composite blueprint with its software components. With the new release of vRA 7.2 I've performed a full redeploy in my lab environment and, since we already have a full Puppet stack running, decided to integrate the two.

Why Puppet? #

Because, let's be honest: vRealize Automation still isn't the right tool for advanced application configuration, managing configuration drift, and ensuring versioning. vRealize Automation significantly simplifies the deployment of both single virtual machines and full multitier application stacks, but it still falls short in actual software configuration and state change management. So why not have our cake and eat it too, by using vRealize Automation for the virtual machine and infrastructure deployment, and Puppet for the operating system and application deployment?

Installing the Puppet environment #

I'm not going to walk through the entire setup, as there are numerous blogs detailing it much better than I ever could. Instead, I'll give you a quick overview of my Puppet environment.

Currently everything is running on the FOSS Puppet stack, but the examples below should work equally well with Puppet Enterprise. In fact, because Puppet Enterprise ships with PuppetDB and its API, plus a preconfigured MCollective stack, it would make some things even easier, such as querying specific information directly from Puppet through vRealize Orchestrator.
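To give an idea of how direct that querying is: PuppetDB exposes a plain HTTP query API. A minimal sketch, assuming a PuppetDB instance reachable at the hypothetical host puppetdb.int.vxsan.com on the default port 8080:

## list every node known to PuppetDB through the v4 query API;
## the same request can be made from a vRO REST host
curl -s http://puppetdb.int.vxsan.com:8080/pdb/query/v4/nodes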

For our lab, we are running a single Puppet master with the following tools:

  • git
  • r10k
  • hiera

We have two environments in r10k: production and staging. While staging isn't used all that often, as it only covers my home machines and lab, I prefer having it around since it enforces best practice.
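For reference, keeping both environments in sync with the control repo is a single command; a sketch, assuming r10k's sources already point at this repo:

## deploy all environments and their Puppetfile modules
r10k deploy environment -pv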

The directory structure used for my puppet code repo is as follows:

repo
├── data
│   ├── os
│   ├── roles
│   └── common.yaml
├── manifests
│   └── site.pp
├── site
│   ├── profile
│   └── role
└── hiera.yaml

hiera.yaml is defined as follows:

---
version: 4
datadir: data
hierarchy:
  - name: "Operating System"
    backend: yaml
    path: "os/%{::operatingsystem}"

  - name: "Roles"
    backend: yaml
    path: "roles/%{::role}"

  - name: "Nodes"
    backend: yaml
    path: "nodes/%{::trusted.certname}"

  - name: "common"
    backend: yaml
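
To sanity-check the hierarchy, lookups can be traced from the master. A sketch, assuming a hypothetical node mediaserver01.int.vxsan.com whose facts the master can resolve (for example via PuppetDB):

## show how the classes key is resolved through each hierarchy level
puppet lookup classes --environment production \
  --node mediaserver01.int.vxsan.com --explain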

The site directory contains all role and profile manifests. For example, this is my mediaserver role:

class role::mediaserver {
  include profile::transmission
  include profile::packages
  include profile::base_linux
  include profile::base
  include profile::cron
  include profile::nfs
  include profile::apache
}

And the data/roles/mediaserver.yaml contains the following:

---
classes:
  - 'role::mediaserver'

cron::job:
  'transmission_cleanup':
    command: '/opt/scripts/removecompletedtorrent.sh'
    description: 'cleans up old transmission files'
    minute: '0'
    hour: '3'
    weekday: '1'

nfs::mount:
  '/mnt/media':
    device: '172.22.100.1:/mnt/tank/uncompressed/media'
    fstype: 'nfs'
    ensure: 'mounted'
    options: 'auto,nofail,noatime,nolock,intr,tcp,actimeo=1800'
    remounts: true
    atboot: true
  '/mnt/media-staging':
    device: '172.22.100.1:/mnt/tank/uncompressed/media-staging'
    fstype: 'nfs'
    ensure: 'mounted'
    options: 'auto,nofail,noatime,nolock,intr,tcp,actimeo=1800'
    remounts: true
    atboot: true

transmission::rpc_username: 'transmission'
transmission::rpc_password: 'nothingtoseeheremovealong'
transmission::rpc_port: '9091'
transmission::peer_port: '51413'
transmission::blocklist_url: 'http://john.bitsurge.net/public/biglist.p2p.gz'
transmission::download_dir: '/mnt/media-staging/transmission'
transmission::incomplete_dir: '/mnt/media-staging/incomplete'
transmission::encryption: '1'
transmission::dht_enabled: true
transmission::pex_enabled: true
transmission::utp_enabled: false


apache::vhosts:
  'repo':
    port: '80'
    docroot: '/var/www/repo'
    servername: 'repo.int.vxsan.com'

Looking at the hiera.yaml above, this setup allows me to completely deploy and configure a server with this role by setting nothing more than the role fact.

Fortunately, Puppet includes the amazing tool Facter, which, in addition to gathering OS-generated facts, allows you to set custom facts. The only thing we need to do for that is provision a file containing these custom facts. In our case, as this is a static fact, we've deployed a file called /etc/puppetlabs/facter/facts.d/role.yaml containing the following:


---
role: mediaserver

And that’s the only thing we need to do: from now on the fact “role” will be set to “mediaserver” for this server.
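
It's worth verifying that the external fact actually resolves before wiring it into vRA; Facter picks up facts.d on its own:

## confirm the external fact is visible
facter role
# => mediaserver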

vRealize Automation setup #

Now on to the juicy bits: deploying this application through vRealize Automation.

We've created a basic single-machine blueprint with some custom property definitions to consume in Puppet:

Obviously you should be getting those environments and roles through external values as opposed to static values stored in vRA, but for the purpose of this demo static values will suffice.

The final request looks as follows:

The user can select their environment and the Puppet role at request time. Now on to the actual blueprint.

To consume the Puppet environment and role, we'll be using the vRealize Automation software components. As these are only included in the Enterprise license, you might not have them, but don't worry: what we're doing here can be done without the Enterprise license, either through vRealize Orchestrator workflows that run programs in the guest, or through the vRealize Automation guest agent. While the implementation might be slightly different, the end result is the same.

We’ve created a single software component to install the Puppet agent, configure it and perform a first run. These are the properties we’ll be using:

Puppet_software_package is a hardcoded content property used to download the agent package from the puppetlabs repository and install it. The other two properties you've already seen in the description above.
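For reference, that content property boils down to fetching the release package that configures the apt repository; a sketch of the manual equivalent, assuming Ubuntu 16.04 and the PC1 collection (both assumptions on my part):

## fetch the puppetlabs release package for the assumed platform
wget -q https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb \
  -O /tmp/puppetlabs-release-pc1-xenial.deb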

Our software component will contain the following scripts:

Install #

#!/bin/bash
## install the puppetlabs release package passed in through the
## puppet_software_package content property
dpkg -i $puppet_software_package
## refresh the package index and install the agent non-interactively
apt-get update
apt-get install -y puppet-agent

Configure #

#!/bin/bash

## Set some variables
conf='/etc/puppetlabs/puppet/puppet.conf'
role='/etc/puppetlabs/facter/facts.d/role.yaml'
## remove the original puppet.conf
rm -f $conf

## write a new puppet.conf
touch $conf
cat << EOT >> $conf
[main]
use_srv_records = true
srv_domain = int.vxsan.com
environment=$puppet_agent_environment
EOT

## set our role fact
cat << EOT >> $role
---
role: $puppet_facts_role
EOT

Start #

#!/bin/bash
## start the agent daemon and enable it at boot
systemctl start puppet
systemctl enable puppet
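
The component description mentions a first run, while the Start script above only brings up the daemon. If you'd rather have the node check in immediately instead of waiting for the first scheduled run, a sketch of an initial foreground run:

#!/bin/bash
## one-off foreground run; --test implies --detailed-exitcodes,
## where exit code 2 simply means "changes were applied"
/opt/puppetlabs/bin/puppet agent --test
rc=$?
## treat "changes applied" as success so the software component
## doesn't flag the provisioning step as failed
if [ $rc -eq 2 ]; then exit 0; else exit $rc; fi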

And our final blueprint in vRealize Automation will look as follows:

And that's it. With the new software components, it really is that simple. No more uploading scripts to the guest through vRO, no more manually troubleshooting scripts or installing software by hand. Instead, play to the strengths of Puppet to deploy and configure your software, and to the strengths of vRealize Automation to deploy and configure your virtual machines and infrastructure.

Now that we've done the basic Puppet deployment, consider what else you can do with this. Some examples:

  • For composite blueprints with multiple machines, set the role fact statically per machine to get automatically deployed multitier applications.
  • Create day-two operations to set facts on the Puppet-managed machines.
  • Derive facts such as environments, teams, datacenters, locations, etc. from properties passed by vRealize Automation. Imagine deploying business-group- or even requestor-specific configurations such as SSH keys, allowed users, etc.
  • When using PuppetDB, vRealize Orchestrator can be used instead of software components to classify nodes, set custom facts, and perform registration (see the sketch below).
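
To illustrate that last point, this is the kind of query vRealize Orchestrator could fire at PuppetDB, again assuming the hypothetical puppetdb.int.vxsan.com host:

## find all nodes carrying the mediaserver role fact (v4 query API)
curl -sG http://puppetdb.int.vxsan.com:8080/pdb/query/v4/nodes \
  --data-urlencode 'query=["=", ["fact", "role"], "mediaserver"]'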

Happy puppeting!

PS: The attentive reader will have noticed that I haven't signed the client certificate: I cheated a bit here and set up autosigning, which is obviously a big no-no in production. Creating a vRealize Orchestrator workflow to sign the certificate isn't too complicated, but that's something for next time.
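
For completeness, that autosigning shortcut amounts to a single glob on the master; a sketch, again for lab use only and assuming the int.vxsan.com domain:

## /etc/puppetlabs/puppet/autosign.conf: blanket-sign anything in the
## lab domain (never do this in production)
echo '*.int.vxsan.com' > /etc/puppetlabs/puppet/autosign.conf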