Open multiple files with vim

There are many instances when it’s useful to have multiple files open in vim, but if you aren’t familiar with this tool you can find yourself needlessly jumping around between multiple windows. If you are doing any kind of real systems work on a Linux operating system, I suggest that you familiarize yourself with vim. If you are not already using vim, start by opening a command prompt and typing vimtutor. Once you’ve become familiar with how to navigate, search, and edit a document with vim, this post will make more sense to you.

Use vim to see what is different between two files

There are several ways to find differences between two files on a Linux server or desktop. I like to use vim when I’m scanning a configuration file for recent changes from an earlier iteration (assuming, of course, that there is a backup of the last known good configuration).

Comparing two files is a common task and there are several ways to view the differences between them, but occasionally you may want to do this visually, side by side.
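As a quick illustration (the file names here are just placeholders), vim’s built-in diff mode opens both files side by side and highlights the lines that differ:

vimdiff httpd.conf httpd.conf.bak
# or, equivalently
vim -d httpd.conf httpd.conf.bak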


How to RTFM (Read the *#&@ Manual)

Finding help with Linux

If you hang out in enough Linux forums asking questions, sooner or later someone will tell you to read the manual (presumably they think this will help you). Fortunately, over the last few years “rtfm” has ceased being the default answer to questions from new users. All things considered, the Linux world has become more user friendly, even if the man pages haven’t.

One of the things that will allow you to separate yourself from new and even some intermediate users is knowing where to find the help you need on your own, knowing how to read the information you find, and then being able to apply that information. Plus, if you plan to take any Linux exams, you will need to know how to find and read man pages. There is simply no way that most of us can memorize all of the command and configuration options needed to pass a Red Hat or Linux Foundation exam. A few starting points are shown below.
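For example (the search terms here are just illustrations), the man command reads a page directly, a section number jumps straight to a configuration file’s page, and apropos searches page names and descriptions for a keyword:

man mount            # read the manual page for the mount command
man 5 fstab          # jump straight to section 5, the file-format page for /etc/fstab
apropos partition    # search man page names and descriptions (same as man -k partition)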


Satellite 6 Duplicate Host Names with Puppet

Satellite 6, Red Hat’s one-stop shop for patch, configuration, and deployment management, is a powerful tool. It is also a formidable and complicated piece of software. One of the big hurdles that I have run into when incorporating Puppet into Satellite 6 is that many of our systems do not use an fqdn (fully qualified domain name) for their host names. This means that when I register “superawesomewebserver01” with Satellite 6, I get a host record that reflects the short name. This isn’t a problem until that same host connects with Puppet: its name is then recorded based on the certificate, which is always the fqdn (i.e. “superawesomewebserver01.example.com”), and the result is duplicate host records showing up in Satellite, each as an independent object.

So, how can you fix this?

What you should not do… probably

Do not use foreman-rake katello:unify_hosts on your Satellite server if you have connected it to a compute resource like VMware. Especially don’t do this if your Satellite user has full privileges to create, modify, and delete VMs. Somewhere in the process of unifying the hosts, this script will delete the short-name host record, which triggers Satellite to delete the host from VMware. I should note that, in my case, people on our team had the foresight not to give Satellite the ability to delete virtual machines, so I didn’t end up losing any critical data or services. Instead, it ended up simply shutting down the target host, causing only a minor inconvenience for myself and the poor soul who happened to be on call at the time.

If running this command is the best or only option you have, then I would suggest that you first disassociate all of your hosts from the compute resource they are linked to. You can do this in the GUI from the “All Hosts” section. I’ve been trying to find a way to do this with hammer-cli, but I haven’t seen anything that looks promising at the moment. Running the foreman-rake katello:unify_hosts command on a production Satellite 6 server that has full permissions in your VM environment could be disastrous, so be cautious (i.e. don’t run it just because someone from support asked you to). If you are not connected to a compute resource, then this solution should work fine after registering all your hosts with Puppet.

A safer way to solve this problem

You can avoid the entire issue of duplicate host records by changing the name of each record (note I didn’t say the hostname of the actual machine) to the fqdn.

Foreman comes with a handy command line interface that will allow us to script this.

hammer host update --name <hostname> --new-name <hostname.example.com>
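Using the short-named host from earlier as an example, that would look something like this:

hammer host update --name superawesomewebserver01 --new-name superawesomewebserver01.example.com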

In my case, I gathered a list of the servers that I wanted to tack the domain name onto, put them into a file called “hostlist.txt” with one entry per line, and used that list to drive a quick loop.

#!/bin/bash
# Rename each Satellite host record listed in hostlist.txt to its fully qualified name
while read -r h
do
  hammer host update --name "$h" --new-name "$h.example.com"
done < hostlist.txt

This will take a bit of time depending on how many records you are going to update, but it is far safer and quite a bit easier than many of the other solutions that I have found while digging through web forums.

Avoiding this problem in the first place

If you do use short names in your environment, one of the things you can do to avoid this problem in the first place is to use the fqdn when you initially register the host with Satellite. The subscription-manager command has an option to register a host with any name you choose.

sudo subscription-manager register --org='<organization>' --name <hostname.example.com> --activationkey='<key>'

I put this in an activation script, and instead of hard coding a name I use --name $(hostname --fqdn) to make sure that I register each host with its proper fully qualified domain name.
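As a rough sketch (the organization and activation key placeholders are the same as above), the registration line in such a script might look like this:

sudo subscription-manager register --org='<organization>' --name "$(hostname --fqdn)" --activationkey='<key>'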

This, I think, is the simplest way to avoid naming conflicts in the future. I’ve seen other suggestions about adding custom facts to new hosts to force subscription manager to use the fqdn, as outlined in this bugzilla report, and I’m sure that probably works just fine. I feel like this solution is a bit more flexible in that it allows you to use whichever name you want.

Command not found!

So you’re running through some instructions to configure software on your system, or to troubleshoot some problem with a service, and you see an error at the command line that says “command not found”. Here is how to locate the packages you need to install in order to use commands that are not available on your system.

CentOS/Red Hat – yum provides

Yum is an excellent package manager with lots of great built-in functions. Running yum provides <command> will output a list of packages that provide the command you are trying to run. Here is an example of the output.

sudo yum provides vgscan
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: repo1.sea.innoscale.net
 * epel: mirror.cogentco.com
 * extras: mirror.cisp.com
 * nux-dextop: mirror.li.nux.ro
 * updates: ftp.linux.ncsu.edu
7:lvm2-2.02.166-1.el7.x86_64 : Userland logical volume management tools
Repo        : base
Matched from:
Filename    : /usr/sbin/vgscan



7:lvm2-2.02.166-1.el7_3.1.x86_64 : Userland logical volume management tools
Repo        : updates
Matched from:
Filename    : /usr/sbin/vgscan

Another good thing about yum provides is that it will also search for files. For example, if you have a file on your system that you would like to match to a specific package or service, yum can get that information for you. Say you are not sure which package installed the file /etc/sysconfig/authconfig:

sudo yum provides /etc/sysconfig/authconfig
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: repo1.sea.innoscale.net
 * epel: mirror.cogentco.com
 * extras: mirror.cisp.com
 * nux-dextop: mirror.li.nux.ro
 * updates: ftp.linux.ncsu.edu
authconfig-6.2.8-14.el7.x86_64 : Command line tool for setting up authentication from network
                               : services
Repo        : base
Matched from:
Filename    : /etc/sysconfig/authconfig



authconfig-6.2.8-14.el7.x86_64 : Command line tool for setting up authentication from network
                               : services
Repo        : installed
Matched from:
Filename    : /etc/sysconfig/authconfig



authconfig-6.2.8-10.el7.x86_64 : Command line tool for setting up authentication from network
                               : services
Repo        : @base
Matched from:
Filename    : /etc/sysconfig/authconfig

Ubuntu

With Ubuntu 14.04 and up you don’t need to run a special command to find a program. For instance, if you try to run the command sar without having first installed sysstat, you will see the following message:

luke@test-srv01:~$ sar
The program 'sar' can be found in the following packages:
 * sysstat
 * atsar
Try: sudo apt-get install <selected package>

It even tells you how to install the packages you need at the end of the message, assuming you read the error messages you get when something doesn’t work. Some of us may or may not be guilty of neglecting to pay attention to error messages.
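In this case, grabbing the package the message suggests is a one-liner:

sudo apt-get install sysstat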

OpenSUSE/SUSE Linux Enterprise – cnf

As with Ubuntu, running a command that doesn’t exist on your system will produce a suggestion for finding the package you need.

luke@test-srv02:~> sar
If 'sar' is not a typo you can use command-not-found to lookup the package that contains it, like this:
    cnf sar

OpenSUSE suggests that we run another command (cnf) to find our package.

luke@test-srv02:~> cnf sar
                   
The program 'sar' can be found in the following package:
  * sysstat [ path: /usr/bin/sar, repository: zypp (SMT-http_smt-ec2_susecloud_net:SLES12-SP2-Pool) ]

Try installing with:
    sudo zypper install sysstat

SUSE, like Ubuntu, gives us a suggestion to install sysstat and even provides the full command to get it. A simple copy and paste should be enough to get the package you want and get back to work.

What to do when df and du report different usage.

You may occasionally come across an issue where running df produces output that disagrees with the output of the du command. If you aren’t familiar with these two commands, see my post about filesystem and directory size. The reason for the difference in reported size is that df reports usage at the filesystem level, which includes space still held by files that have been deleted but remain open in memory, whereas du only counts the files it can see on the disk. You should recognize that these tools serve different functions and that you will need to rely on both of them to get a truly accurate picture of disk usage on your system.

Let’s say you run df -h to get an idea of how much space you have on each of the filesystems on your server or PC, only to see that /var is 98% full: 9.8G out of 10G, just to keep it simple. Like a good admin you run du -h --max-depth=1 /var to find out which directories are the largest and may have files that need to be zipped up, moved, or deleted. The problem becomes apparent when du reports that just 3G are in use on that filesystem. What do you do now?
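Purely as an illustration of that scenario (the device name and numbers here are made up to match the example above, and du -sh just summarizes the total), the two commands disagree like this:

df -h /var
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        10G  9.8G  200M  98% /var

du -sh /var
3.0G    /var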

Check for deleted files in memory.

Have you heard the old saying around the Unix world that “everything is a file”? Well, it’s true: everything in Unix, and by association Linux, is a file. This includes deleted files that now live as chunks of memory in use by a process.

You can view all open files on a system with the lsof command, including deleted files that live in memory and are in use by a process (for example, an old configuration file). For instance:

sudo lsof | grep root

will show you every open file whose lsof entry mentions root, which will probably be a lot of files. Running sudo lsof | less will show you all of the open files on your system. It will look something like this (I’m only grabbing the header and first three lines for brevity):

COMMAND     PID   TID             USER   FD      TYPE             DEVICE SIZE/OFF       NODE NAME
systemd       1                   root  cwd       DIR              202,1     4096          2 /
systemd       1                   root  rtd       DIR              202,1     4096          2 /
systemd       1                   root  txt       REG              202,1  1577232     396000 /lib/systemd/systemd

Here you can see the command, the process id (PID), which user has the file open, the file descriptor (FD), the size in bytes, and the location. In our scenario we want to find out if there are any large files open that may have been deleted. We can find those files like this:

sudo lsof | grep -i deleted

Keep an eye on the 8th column, which if you recall is the SIZE/OFF column. Once you identify your large files, check which user has the file open (4th column); usually this will be a service account like www-data, apache, or mysql. Or pay attention to the COMMAND column to identify the process or service that is using the old file. After you identify the offending process, all you need to do is restart the service using systemctl or service, or send it a kill -HUP, as in the sketch below.
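For example, assuming the culprit turned out to be Apache holding a deleted log file open (the service name and PID here are hypothetical):

sudo systemctl restart httpd      # restart the service holding the deleted file open (or: sudo service httpd restart)
sudo kill -HUP 1234               # or send SIGHUP to the PID from the lsof output so it lets go of the old file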

In conclusion

Don’t panic; take a breath, assess what you are seeing, and think about how your tools work and what they are showing you. Above all, don’t just start deleting things to free up space! The reason that df and du disagree here is that df sees these deleted files along with their replacements and counts both toward the total disk usage, while du only sees the new files. Now that you know how to find the zombie files, you shouldn’t have too much trouble bringing these two system tools back into agreement.