If you get a “Refusing to undefine while domain managed save image exists” message while trying to undefine a domain using virsh, try passing the `--managed-save` flag like so:
$ -> virsh undefine service-a-3 --managed-save
The flag was added as part of this bug.
I recently ran into a situation where I needed to grant access to certain /home directories in order to get puppetmaster started with SELinux enforcing. I also had to do it in such a way that I could keep the resulting “type enforcement” (.te) file in version control, which would allow me to track changes in a human-readable form.
$ -> cd ~
$ -> echo > /var/log/audit/audit.log # start with a clean log for analysis
$ -> /etc/init.d/puppetmaster start # should fail
$ -> audit2allow -i /var/log/audit/audit.log -m puppetmaster # outputs the permissions puppetmaster needs to access the required resources; copy and paste this into the .te file you keep in version control
$ -> checkmodule -M -m -o puppetmaster.mod /path/to/your/version/controlled/puppetmaster.te # creates a .mod file
$ -> semodule_package -m puppetmaster.mod -o puppetmaster.pp # creates a compiled semodule
$ -> semodule -i puppetmaster.pp # installs the module
At this point, you have added a custom puppetmaster SELinux module that gets you past the first issue discovered when trying to start the service. From here there are two possible outcomes. If the service starts, you are done. If the service does not start, repeat the steps above to determine which new permissions are required; rinse and repeat until your service starts.
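Since the checkmodule / semodule_package / semodule steps get repeated on every pass, it can help to wrap them in a small shell function. This is just a sketch; the function name and example path are my own, so adjust them to your checkout:

```shell
# Rebuild and reinstall the policy module from the version-controlled
# .te file; re-run this after each audit2allow pass adds new rules.
build_semodule() {
    te_file="$1"                       # e.g. ~/policies/puppetmaster.te
    name=$(basename "$te_file" .te)
    checkmodule -M -m -o "$name.mod" "$te_file" &&
    semodule_package -m "$name.mod" -o "$name.pp" &&
    semodule -i "$name.pp"
}
```

Called as `build_semodule /path/to/puppetmaster.te`, it stops at the first failing step thanks to the `&&` chaining.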
After moving melikedev.com to a VPS, I was able to improve performance by configuring nginx to serve static assets while Apache serves the dynamic content. In order to test whether the nginx config was correct, I used the `curl` command to inspect the response headers of known static assets. After testing and implementing the changes, the site has never performed better.
When testing response headers I passed a few flags to the curl command; the `-I` flag tells curl to fetch and output only the response headers, and the `-L` flag tells curl to follow any redirects to the final location.
mpurcell@service1 -> curl -I -L http://melikedev.com/wp-content/plugins/sociable/js/sociable.js?ver=3.4.2
HTTP/1.1 200 OK
Server: nginx/1.0.15
Date: Thu, 24 Jan 2013 21:20:44 GMT
Content-Type: text/css
Content-Length: 63287
Last-Modified: Sat, 29 Dec 2012 08:50:42 GMT
Connection: keep-alive
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
Pragma: public
Cache-Control: public
Accept-Ranges: bytes
Notice in the response headers that the response code is 200, which is good. But also notice the Expires date; this far-future date allows the browser to cache the static asset for a long time, which means that when a returning user refreshes the page, the static assets should be loaded from browser cache.
There was one more performance step I took to speed up the render time of melikedev.com pages, and that was to compress static assets such as XML, HTML, CSS, etc. In order to test whether the compression was working, I had to add an extra flag to the curl command line, the `-H` flag. This flag allows you to pass custom headers with the request, which can dictate the data contained within the response headers.
mpurcell@service1 -> curl -I -H "Accept-Encoding: gzip, deflate" -L http://melikedev.com/wp-content/plugins/sociable/css/sociable.css?ver=3.4.2
HTTP/1.1 200 OK
Server: nginx/1.0.15
Date: Thu, 24 Jan 2013 21:24:30 GMT
Content-Type: text/css
Last-Modified: Sat, 29 Dec 2012 08:50:41 GMT
Connection: keep-alive
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
Pragma: public
Cache-Control: public
Content-Encoding: gzip
Notice in this set of response headers that the return code was again 200 and the Expires date was still far in the future, but now there is a `Content-Encoding` field set to gzip. Because we told the server we accept gzip and deflate encodings, the server compressed the static asset.
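To avoid eyeballing the headers each time, the checks can be scripted. Here is a sketch that runs against a saved copy of the headers; on a live box you would capture them first with `curl -sI -H "Accept-Encoding: gzip, deflate" -L <url> > headers.txt`:

```shell
# Saved response headers (sample values taken from the output above)
cat > headers.txt <<'EOF'
HTTP/1.1 200 OK
Content-Encoding: gzip
Cache-Control: max-age=315360000
Expires: Thu, 31 Dec 2037 23:55:55 GMT
EOF

# Assert the three things we care about: a 200 status, far-future
# caching, and gzip compression
grep -q  '^HTTP/1.1 200'          headers.txt && echo "status: ok"
grep -qi '^Content-Encoding: gzip' headers.txt && echo "gzip: ok"
grep -qi '^Cache-Control: max-age=' headers.txt && echo "caching: ok"
```

Each `grep -q` exits non-zero if the header is missing, so a missing check simply prints nothing.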
Taking steps to increase performance makes end users happy and increases hardware life cycles.
Currently I am working on a project to centralize all syslog entries on one server, and I have been running into issues because I run SELinux and store the rsyslog.conf file in version control. By default you can tail /var/log/audit/audit.log to see what’s going on, but the messages are fairly cryptic and not easily understood. After some research I found sealert, an executable which parses the audit log and converts the messages into a human-readable format. sealert does not get installed by default on CentOS systems, so you have to do:
yum install setroubleshoot-server
After the install is completed, you can then analyze the audit log by issuing the following command:
sealert -a /var/log/audit/audit.log > /var/log/audit/audit_human_readable.log
It took a few seconds to analyze, but upon completion I was able to open the human readable version, and see entries like:
--------------------------------------------------------------------------------
SELinux is preventing /sbin/rsyslogd from search access on the directory /home.

***** Plugin catchall (100. confidence) suggests ***************************

If you believe that rsyslogd should be allowed search access on the home directory by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do allow this access for now by executing:
# grep rsyslogd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp
So I was able to create the policy module that gives rsyslogd the access it needed to read its config file under /home and run properly.
I run Percona MySQL on all my CentOS servers by way of the Percona MySQL RPM, so every time I run `yum update`, Percona MySQL also gets updated. Unfortunately, when yum reports back with the list of packages to be updated and Percona MySQL is among them, all I see is a version number. Rather than blindly accepting the package upgrade, I would also like to see the change log, so I had to do some searching, and found the 5.1 and 5.5 release notes. I am posting the links so others can easily access the release notes every time a new release makes its way through a yum update.
I have been working with Magento and came across another hurdle. Magento requires the mcrypt PHP module to be compiled in, otherwise you will not be able to complete the install process. So naturally I opened up a terminal and typed `yum install mcrypt`, only to find that no such package existed. Apparently the default repos no longer provide the mcrypt libraries, so I had to use the EPEL repo, which does provide the required mcrypt libraries.
The following steps outline how I successfully installed mcrypt libraries on my CentOS (6.x) system:
Install EPEL Repo
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
To verify that it was installed correctly, you can type `$ -> yum repolist`
Disable EPEL Repo
I don’t like not knowing what is installed on my system, so I didn’t want to keep the EPEL repo enabled by default. Rather, I preferred to tell yum to use EPEL only when I directed it to do so. In order to accomplish this, you need to make the following change:
# /etc/yum.repos.d/epel.repo
enabled=1 # change to enabled=0
Now, the EPEL repo will not automatically be considered when you go to install a new package. Convenient for ensuring your system stays as “vanilla” as possible.
$ -> yum install libmcrypt libmcrypt-devel mcrypt --enablerepo=epel
The above command will install the mcrypt libraries as provided by the EPEL repo.
The libmcrypt-devel package is only necessary if you are going to compile the PHP mcrypt module.
Now that we have mcrypt installed on our system, we can compile the PHP mcrypt module. First, let’s find out where mcrypt was installed:
$ -> which mcrypt
/usr/bin/mcrypt
Now that we know where mcrypt is installed, we can add the following flag to our PHP configure line for compilation: `--with-mcrypt=/usr/bin`
After configure, be sure to run make and make install; once they complete you should be able to run `php -m` and see mcrypt among the compiled modules.
Be careful when upgrading CentOS 5.x servers to 6.x with respect to Git. On CentOS 5.x, Git was not available via the default repo, so I compiled it manually and ended up on version 1.7.6; but when I installed CentOS 6 and did a yum install for Git, it was at version 1.7.1.
Glad that CentOS finally provided Git via the default repo, I let yum handle the install, but when I went to push code to my production box (still running my hand-compiled Git 1.7.6) I kept getting Git failures. Only after I upgraded my production box to CentOS 6 and yum-installed Git did my code push script work again.
Had a weird issue where my demo server was throwing a 500 error whenever a request was made. I spent time digging into my nginx configs to see if there was an issue; once I determined it was not nginx, I started tearing apart my Apache vhosts. It was tough to track down because neither the nginx nor the Apache error logs showed anything out of the ordinary.
I came across a serverfault.com post where someone suggested using the following command after making a request:
find /var/log/ -mmin -1
This command returns any files whose modification times are less than one minute old. When I issued the command I noticed that my PHP error log was listed, so I tailed the error log and issued another request. Sure enough, an exception was being thrown because the bootstrap file for my core library could not make a needed database connection.
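The technique is easy to replicate in a scratch directory if you want to see exactly what `-mmin -1` matches; the paths below are made up for the demo:

```shell
# Create a scratch "log" directory and touch a file in it
mkdir -p /tmp/logdemo
touch /tmp/logdemo/php_errors.log

# List anything under it modified within the last minute --
# the freshly touched file shows up
find /tmp/logdemo -mmin -1 -type f
```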
Now that I was aware that an exception was causing my Apache server to issue a 500 response, I still couldn’t figure out why it wasn’t outputting the exception message. I started digging around my php.ini file and found that I had set `display_errors=off`. With display_errors set to off, Apache issues a 500 response rather than outputting the exception, which is a good thing because the exception message contained some database connection information.
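For reference, a minimal sketch of the relevant php.ini settings for a production box set up this way; the error_log path is an assumption, so set it to taste:

```ini
; php.ini (production)
display_errors = Off                 ; never leak exception details to clients
log_errors = On                      ; but do record them server-side
error_log = /var/log/php_errors.log  ; assumed path - pick your own
```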
So if you are setting up an Apache/PHP server and it’s throwing a 500 response, check the PHP error log too; you may have the same setup.
Also, I have read that this type of behavior can occur if the php.ini file could not be read at all.
If you are using GIT as your version control and you attempt to do a `git pull` and get a “fatal: Where do you want to fetch from today?” message, you need to do either of the following:
# Specify the remote repo
mkdir repo
cd repo && git init
git pull firstname.lastname@example.org:user/repo.git

# Or

# Clone the repo
mkdir repo
cd repo && git clone email@example.com:user/repo.git .
git pull
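You can see why the clone variant fixes things by reproducing it locally, with a bare repo standing in for the remote server. Everything below is scratch setup for the demo (paths, names, and the example commit are all made up):

```shell
# A bare repo plays the part of the remote
mkdir -p /tmp/gitdemo && cd /tmp/gitdemo
git init --bare origin.git

# Seed it with one commit via a throwaway clone
git clone origin.git seed && cd seed
git config user.email demo@example.com && git config user.name demo
echo hello > file.txt && git add file.txt && git commit -q -m init
branch=$(git symbolic-ref --short HEAD)
git push -q origin "$branch"
# Point the bare repo's HEAD at the branch we just pushed
git -C /tmp/gitdemo/origin.git symbolic-ref HEAD "refs/heads/$branch"
cd ..

# Cloning records origin.git as the remote and sets up branch
# tracking, so a plain `git pull` now knows where to fetch from
git clone origin.git work && cd work
git pull
```

The bare-init `git pull <url>` variant works too, but only because the URL is given explicitly each time; the clone is what persists the remote in `.git/config`.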
Linux – CentOS6 – httpd – mod_file_cache – mod_mem_cache – mod_imagemap – Cannot Load into Server – Cannot Open Shared Object
If you are upgrading from CentOS5 to CentOS6 and attempt to start httpd, you may come across a message similar to: “Starting httpd: httpd: Syntax error on line 196 of /etc/httpd/conf/httpd.conf: Cannot load /etc/httpd/modules/mod_file_cache.so into server: /etc/httpd/modules/mod_file_cache.so: cannot open shared object file: No such file or directory”. Apparently the removal of some apache mods was an upstream decision which CentOS passed through to us. You can find more information here.
The quick fix is to remove these references from your httpd.conf file; however, if you depend on these modules being available, you will have to download and compile them manually.