Force Local Users and Groups with Ansible

I’m migrating a few Puppet modules over to Ansible, and in the process I’ve run into an unusual situation while creating users and groups. Here is some background. I have an application that will refuse to complete its installation unless it can see certain users and groups in the local passwd and group files. It just so happens that these same users and groups are also contained in LDAP.

Puppet has an attribute called “forcelocal” in its user and group resources that has always been able to create a local user or group in this situation, despite a matching user or group existing in LDAP. So, I was a bit disappointed to discover that the similar “local” option in both the group and the user Ansible modules did not work the same way. From the user module docs, the “local” option has the following behavior:

“Forces the use of ‘local’ command alternatives on platforms that implement it. This is useful in environments that use centralized authentication when you want to manipulate the local users. I.E. it uses `luseradd` instead of `useradd`. This requires that these commands exist on the targeted host, otherwise it will be a fatal error.”

After reading that, I expected Ansible to create local users and groups regardless of whether that user or group was already found in LDAP. However, that is not the case. For whatever reason, specifying the local option does not create a local user or group if that user or group is already in LDAP and is visible to your target server. Instead, Ansible will simply mark the task as complete and happily move on to the next step. Looking at the code for the module, it’s using “grp” from the Python standard library, so it just checks the user database for the user or group; since it finds the user (albeit in LDAP), it moves on, which for my use case defeats the whole purpose of the local option. I would like to see this module do a further check to see whether the specified user or group is listed in the /etc/passwd or /etc/group files before marking success.
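To illustrate the difference, here is a rough Python sketch (not the actual module code; the function names are mine) showing why the check matters: asking the NSS user database, as the module effectively does, will find LDAP users, while parsing /etc/passwd directly finds only local ones.

```python
import pwd

def user_in_local_passwd(name, passwd_path="/etc/passwd"):
    """Return True only if `name` has an entry in the local passwd file."""
    with open(passwd_path) as f:
        return any(line.split(":", 1)[0] == name for line in f)

def user_visible_to_nss(name):
    """Roughly what the Ansible user module checks: the full NSS user
    database, which also includes LDAP users when sssd is running."""
    try:
        pwd.getpwnam(name)
        return True
    except KeyError:
        return False
```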

After a bit of head scratching and cursing, I read a few blogs and Stack Exchange solutions from others who had attempted to solve this, but none of them struck me as viable for my situation. Because I’m not enough of a programmer to fix this little bug, and since I only need to run this particular playbook once, at the time a server is deployed, I chose a bit of a compromise solution.

At first, I kicked around the idea of inserting the user and group directly into the passwd, shadow, and group files, but that just didn’t seem like a clean solution to me. Plus, I assume this problem will eventually be fixed, so it seems easier to continue using the group and user modules than to rewrite the playbook in the near future. So I decided to do the following: stop the sssd service (thus making the LDAP users and groups invisible to the server), add the users and groups using the Ansible modules, and then restart sssd.

Here is a slimmed-down version of what I ended up doing in the playbook. Keep in mind, it will only work if you are using sssd, and you should also make sure that your server is in a state where sssd can be stopped while these tasks are processed. In my case it’s fine because I only run this particular sequence once, when the server is built.

- name: stop sssd
  service:
    name: sssd
    state: stopped

- name: add group
  group:
    name: localgroup
    gid: 1234
    state: present

- name: add user
  user:
    name: localuser
    uid: 1234
    group: localgroup
    state: present

- name: start sssd
  service:
    name: sssd
    state: started
    enabled: yes

Let me know if you’ve run into this, or if you have a better solution. I suspect you could make this a bit more universal by adding a test to see if the user and group have an entry in the passwd and group files before stopping sssd.
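That pre-check could look something like the following sketch (untested; the grep pattern and names are placeholders matching the example above), which only stops sssd when the account is actually missing from the local passwd file:

```yaml
- name: check whether localuser already exists in the local passwd file
  command: grep -q '^localuser:' /etc/passwd
  register: local_user_check
  changed_when: false
  failed_when: false

- name: stop sssd
  service:
    name: sssd
    state: stopped
  when: local_user_check.rc != 0
```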

Convert a pem file into a rsa private key

When you build a server in AWS one of the last steps is to either acknowledge that you have access to an existing pem file, or to create a new one to use when authenticating to your ec2 server.

If you want to convert that file into an rsa key that you can use in an ssh config file, you can use this handy dandy openssl command string.

openssl rsa -in somefile.pem -out id_rsa

Note: you don’t have to call the output file id_rsa, but if you do, make sure that you don’t overwrite an existing id_rsa file.

Copy the id_rsa file to your .ssh directory and make sure to change permissions on the id_rsa key to read only for just your user.

chmod 400 ~/.ssh/id_rsa
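If you go the ssh config file route, the entry might look something like this (the host alias, hostname, and user below are placeholders for your own values):

```
Host my-ec2-server
    HostName ec2-xx-xx-xx-xx.compute-1.amazonaws.com
    User ec2-user
    IdentityFile ~/.ssh/id_rsa
```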

How to get started using Ansible

Install Ansible

On most Linux distributions Ansible can be installed directly through your distribution’s package manager. For those using macOS or a distribution that doesn’t package Ansible, you can install it via Python pip. The Ansible docs have a really good walkthrough for installation.

I won’t repeat those instructions except to say that the computer you install Ansible on will need Python 2.6 or higher or Python 3.5 or higher, and that while Windows can be managed by Ansible, it cannot be a control machine. Once Ansible has been installed it is, for the most part, ready to be used. There are no daemons to start or services to enable; Ansible runs when you call it, either directly from the command shell or via scheduled tasks.

Configuration files:

There are a few files that you should be aware of when you get started using Ansible.

The Ansible configuration file:

Located at /etc/ansible/ansible.cfg, this file contains all of the global Ansible settings. Here you can specify the user that Ansible will run as, the number of parallel processes (forks) that it will spin up, and many other configuration items to help you fine-tune your Ansible installation. The settings in this file can be overridden by creating an ansible.cfg file in your user’s home directory (~/.ansible.cfg) or in the directory that contains your playbooks (<current_dir>/ansible.cfg). For common options and a more in-depth explanation of these files take a look at the documentation.
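For example, a minimal ansible.cfg might look like this (the values shown are illustrative, not recommendations):

```ini
[defaults]
inventory = /etc/ansible/hosts
remote_user = ansible
forks = 10
```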

The Ansible inventory file:

The Ansible inventory file is located at /etc/ansible/hosts. You can change the default location of this file by updating the Ansible configuration file to point to a different path, or you can specify an inventory file with “-i” when calling an Ansible playbook. The inventory file is a flat text file that lists the hosts you want to manage using Ansible. Any host that you want to run a playbook on must first have an entry in the inventory file. When you installed Ansible, a template file should’ve been created for you at /etc/ansible/hosts. The structure of the inventory file is pretty simple and fairly easy to learn, though as you learn more about Ansible you will find that there are a lot of modifications that can be made to this file that will make your playbooks even more powerful. The basic setup of an Ansible inventory file has a header section, noted with brackets, to indicate a group of servers. For example, a group that contains all servers might look like this:

[all]
server1.example.com
server2.example.com
...
A group that contains only database servers would look like this:

[dbservers]
db1.example.com
db2.example.com
...
The dots, in this case, indicate that the list could go on and on; don’t put them in your inventory file. You can have any name you want for your groups, just make sure that they make sense to you. You can also have groups of groups, and variables that apply to entire groups. The documentation on building inventory files is what you will need to create well-defined ones.
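As a quick sketch of those last two features (group and host names here are made up), groups of groups use a “:children” suffix and group variables use “:vars”:

```ini
[webservers]
web1.example.com

[dbservers]
db1.example.com

[production:children]
webservers
dbservers

[production:vars]
ansible_user=deploy
```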

Don’t skip that section; read it carefully and take time to define your groups well.

Getting Started with Ansible

Once you have Ansible installed, build at least a simple inventory file. Try making a test group with a couple of hosts in it by appending the following to the bottom of your /etc/ansible/hosts file.

[test]
host1.example.com
host2.example.com
Ad-hoc commands

The first thing you should do if you have never used Ansible is to try out some of the ad-hoc commands. These commands can run Ansible modules against a host or group of hosts without writing a playbook.

One of the most useful ad-hoc commands is also the simplest one. Ansible has a module built into it called “ping”. The ping module will connect with a target system and attempt to locate a compatible python installation. It will then report back with success or failure. I use this module quite a bit to ensure that I have connectivity between my Ansible control server, and the client servers that I want to run playbooks against. This helps to minimize surprises when a playbook fails to run at the expected time.

To use the ping module run the following command:

ansible test -m ping

In this command string, we are doing a couple things.

First, we need to specify the group or host that we want Ansible to run against, in this case, it is the test group we just created. Second, we specify the module that we want to use, in this case, that is the ping module.
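If everything is wired up correctly, the output looks roughly like this (the host name will of course be your own):

```
host1.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```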

As long as your Ansible control server (which could be a laptop at this point) can connect to the target hosts on port 22, this command should complete successfully. Speaking of success, how can you know if an Ansible playbook or command ran successfully?

Ansible output is color coded

Green text: Indicates that Ansible ran successfully but made no changes to the target host. This is a success in that the host was already configured as defined in the playbook.

Yellow text: Indicates that Ansible ran successfully and made changes to the target host. This is a success in that the host is now configured as defined.

Red text: Indicates a failure, either a connection failure, a syntax error, or a failure to run the appropriate tasks on the target host. If you see this you will want to take a moment to read the output. Ansible usually gives fairly detailed error messages.

Purple text: Indicates a warning. You will see this when you call a command rather than using an available module, e.g. calling yum in a command rather than using the yum module. Generally Ansible will still run when you see purple; however, you should consider updating your playbook to use the built-in modules instead. You may also see purple text when a host is specified in a playbook but does not have a corresponding entry in the inventory file.

For more information, read the ad-hoc commands page in the Ansible documentation.

Convert old shell scripts

Once you have spent some time playing around with ad-hoc commands and feel comfortable using them, start evaluating all the bash scripts you have lying around to find out which ones would be a good fit for Ansible. If you have scripts that you run for patching, firewall changes, configuration file changes, or user creation/modification/deletion, then you already have a lot of low-hanging fruit to pick from.
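As a tiny example of the kind of conversion I mean (the package and file names are placeholders), a script that installs a package, drops a config file, and starts a service becomes a playbook like:

```yaml
- hosts: webservers
  become: yes
  tasks:
    - name: install nginx
      yum:
        name: nginx
        state: present
    - name: deploy config file
      copy:
        src: files/nginx.conf
        dest: /etc/nginx/nginx.conf
    - name: ensure nginx is running
      service:
        name: nginx
        state: started
        enabled: yes
```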

Once you start converting bash scripts to Ansible and running them from a centralized server, you will begin to see the power that this tool affords you.

Common repeatable tasks

If you find yourself doing the same thing over and over again, take a few hours and, instead of going through the motions, see if you can break those tasks down into steps. After you have converted some shell scripts into Ansible, it should be a little easier to identify which of your day-to-day tasks can be accomplished by an Ansible playbook. I know most sysadmins are not straining to fill the hours at the office; we all have more work than we know what to do with. But I promise that spending a couple hours a week on automation will be well worth it in the long run, and unless your boss is incompetent or some kind of control freak, they should be ecstatic to see you working to improve processes. If not, find a new job…

Things you don’t like to do

This is why I love Ansible. Once you have converted common configuration scripts, identified and automated repetitive tasks, you can begin the noble work of getting Ansible to do the parts of your job that you don’t like doing. Updating a firewall on 200 servers? That sounds awful and I don’t want to do it… Spend one day on a playbook, and never worry about it again. Spending your nights and weekends patching? Nah, I’d rather sit on the couch and watch my cats chase a laser while stuffing my face with pizza and beer. Seriously, work is for suckers. Spend some time learning Ansible and it will pay you back in spades. You might even find yourself with enough time to work on a project you haven’t had time for. Or maybe take a vacation.

Ansible Example: Patch and reboot

Ah yes, patching, we have to do it. If you’re not regularly applying patches, you need to have a really good reason not to and a good mitigation strategy. Patching is something that, in large environments, can quickly consume a lot of time if it isn’t managed properly, and more importantly if it isn’t automated. To that end, I have provided an example playbook that you can use to begin automating some of your vulnerability patching.

---
- hosts: <put some server names in here, without the angle brackets>
  become: yes
  tasks:
    - name: Upgrade all installed packages
      yum:
        name: '*'
        state: latest
    - name: Reboot after update
      command: /sbin/shutdown -r 1 "Reboot after patching"
      async: 0
      poll: 0
      ignore_errors: true
    - name: Wait for server to become available
      wait_for_connection:
        delay: 60
        timeout: 500 # This can vary; use a timeout that is reasonable for your environment. Most VMs will reboot within 500 seconds.

Breaking down the playbook

What does it do? This playbook will patch, reboot, and wait for a server to become available. After you run it you will get a nice little color-coded summary of the play.


Every YAML file starts with “---”; it’s like the #! at the start of a bash script.

What does “become” do? Using the keyword “become” means that upon executing this playbook Ansible will attempt to escalate privileges to root.

After that, you can list your tasks in blocks using modules. You can find an index of provided modules in the Ansible module documentation.

Granted this is a pretty simple playbook, it only takes into account Red Hat based systems, and it doesn’t provide notifications upon completion, but I’ll leave that part up to you. If you spend a lot of time patching then this little snippet of code should get you a long way towards reducing the manual effort you put into server updates.
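For the notification piece, one option (a sketch; the addresses and mail host are placeholders) is to add a task using the mail module at the end of the play:

```yaml
- name: Send completion notification
  mail:
    host: smtp.example.com
    to: admin@example.com
    subject: "Patching complete on {{ inventory_hostname }}"
    body: "The patch-and-reboot playbook finished."
  delegate_to: localhost
```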

If this helped you feel free to share it or comment below.