Tuesday, July 3, 2012

If It's Friday, Something Must Be Breaking

I had an appointment that required me to leave the office at 3:15 today. My kid was due for her one-year pediatrician appointment, complete with shots, and I was going to meet my wife at the train station so we could all go. As a family, we've gone to every pediatrician appointment she's had in her year on this earth, save one minor visit. At 2:10, things started to go wrong in the office. So wrong.

First there were reports that our in-house app was freezing. A quick check of some basic server vitals (memory, disk space, etc.) showed everything looking hunky dory. We've had issues in the past where Java leaked memory or something else programmatic caused the problem, but the Tomcat logs showed no appreciable delay in Apache handing off requests, and no errors in Tomcat itself. Since this app affects only internal workers, and there were few of us left at that point, I opted to restart the Tomcat service. Tomcat stopped, and didn't start back up. Why? The file system had mysteriously gone into read-only mode.
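For reference, the kind of triage I did looks roughly like this (log paths and service names are from a typical Ubuntu Tomcat setup and may differ on yours):

    # Quick server vitals
    free -m          # memory usage
    df -h            # disk space
    uptime           # load average

    # Skim the Tomcat log for errors (path varies by install)
    tail -n 100 /var/log/tomcat6/catalina.out

    # Restart Tomcat -- this is the step that wouldn't complete for me
    sudo service tomcat6 restart

    # The tell-tale check: is root mounted read-only?
    mount | grep ' / '    # look for "ro" in the mount options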
This was not my first run-in with the dreaded read-only file system. We actually had a server in our data center mysteriously experience the same issue last week (I talk about this in another post that I have yet to finish). I checked dmesg and found lines to the effect that journaling had stopped and /dev/sda1 was being remounted read-only. Whoops. I had already learned from my previous run-in that touch /forcefsck was not going to work. Since the server was in-house I went straight to my Knoppix CD, booted off of it, and ran fsck on the root file system. It cleaned what it needed to, and when I rebooted the file system was back to normal. I thought everything was well and good (save for an obvious need to find out whether this was a hardware issue or just some bad blocks on the drive). I emailed everyone to let them know they could get back into the program.
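If you hit the same thing, the diagnosis and fix look approximately like this (the device name is from my box; check dmesg for yours):

    # Confirm the kernel remounted the disk read-only
    dmesg | grep -i -e 'read-only' -e 'journal'

    # touch /forcefsck did NOT help in my case; boot a live CD instead.
    # From Knoppix, with the root file system NOT mounted:
    fsck -y /dev/sda1    # -y answers yes to all the repair prompts
    reboot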

Two minutes later I got an email reporting that the program was still freezing. I tried it myself. I could log in, I could run searches, but if I tried to alter information in the program it froze up. Back on the server I checked out MySQL and found that when I logged in and tried to access the database our program uses, it froze and wouldn't let me in. I checked the processes and found something called "Repair with keycache" being run by the user debian-sys-maint, which is a maintenance user that Ubuntu creates with its version of MySQL. I had no idea what that was though, so I pinged one of the developers, who explained that MySQL was checking all the tables in that database because it had been improperly shut down. He confirmed that this was likely what was causing the freezing for people: the tables were locked for writing while the check was taking place.
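If you want to see this for yourself, the processlist is the place to look. Something along these lines:

    mysql> SHOW FULL PROCESSLIST;
    -- Look for a row whose State column reads "Repair with keycache".
    -- Any tables it's working on stay locked for writes until it finishes.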

This was all news to me. I know enough about MySQL to be dangerous (and to set up replication), but this was quickly escalating outside of my area of expertise. I was also curious about why MySQL hadn't been shut down properly, since I had rebooted the server with shutdown -r now, which is supposed to run through and call all the appropriate scripts for the runlevel to shut down running processes. To the interwebs!

Looking into this is where I learned a few important things. Ubuntu uses Upstart for many services instead of the old-school SysV init way. In case you're not familiar, init depends on a default runlevel entry in the /etc/inittab file; depending on that runlevel, a set of scripts is called and services are launched serially. There's apparently an issue with the way Upstart shuts down MySQL, especially for MyISAM tables, which we mainly use. Upstart sends a shutdown signal to mysqld, waits for a specified time (a timeout value set in the Upstart job file), and if MySQL hasn't shut down yet it sends a SIGKILL. That is pretty much like flipping the power switch on the server: if MySQL is taking too long to shut down (completing some process or transaction), Upstart kills it right in the middle, practically guaranteeing corrupted tables. Later releases have "fixed" this by increasing the timeout to 300 seconds, and I confirmed that my mysql.conf Upstart script uses that value.
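For the curious, that knob lives in the Upstart job file, /etc/init/mysql.conf on my box. The relevant stanza looks roughly like this:

    # /etc/init/mysql.conf (excerpt)
    # How long Upstart waits after "stop" before escalating to SIGKILL.
    # Older releases shipped a much shorter value, which is what bites MyISAM users.
    kill timeout 300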

Another bit of interest is that apparently /etc/init.d/mysql also calls a script called /etc/mysql/debian-start that automatically performs a check for crashed MyISAM tables. So, if there are crashed tables, the repair starts automatically instead of letting you do it manually.
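I haven't picked that script apart line by line, but conceptually (this is an approximation, not the script's literal contents) it boils down to something like:

    # Rough sketch of what /etc/mysql/debian-start kicks off -- read the
    # actual script on your system for the real logic. It authenticates as
    # debian-sys-maint via /etc/mysql/debian.cnf and runs a fast check for
    # tables that weren't closed properly:
    mysqlcheck --defaults-file=/etc/mysql/debian.cnf --all-databases --fast --silent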

Why did MySQL crash anyway? There were likely some MySQL processes hanging because of the read-only file system, meaning pending database writes were never completed. That didn't occur to me at the time, though, and I simply issued the standard shutdown command. Doh!
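In hindsight, the safer sequence (a sketch of what I'd try next time, not what I actually did) is to stop MySQL cleanly and confirm it's really down before rebooting:

    # Ask MySQL to shut down cleanly
    sudo service mysql stop

    # Make sure mysqld actually exited before rebooting
    pgrep mysqld && echo "still running -- wait or investigate" || echo "down"

    sudo shutdown -r now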

What happened after the reboot and fsck completed? On startup, MySQL ran its table check, and some MyISAM tables had indeed crashed. The tables were being repaired, and while being repaired they were either locked or simply using so much I/O that the application was still unavailable. That was the "Repair with keycache" I found in the processlist, and, as one of the developers confirmed, the reason the database was still unavailable.

So what is repair with keycache? When I Googled it, it was immediately clear that whatever it was, it was undesirable. In fact, I saw more than one reference to the "dreaded repair with keycache". Yikes. To understand the key cache, you first have to understand indexing. An index in MySQL is like the index of a book: instead of flipping through the book looking for every instance of the term "tomato plant", you look up "tomato plant" in the index, which points you to the specific pages. Similarly, in MySQL you create an index on a column you search often. For example, if you have a users table with an age column that you often query (select * from users where age = '30';), you could create an index on the age column to make those queries faster. Instead of MySQL scanning the entire users table and checking every value for a match, it can use the index and pull up the matching rows much more quickly. The downside is write overhead: every time you change data in an indexed table, the index has to be updated as well.
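To make that concrete, here's a minimal sketch using the hypothetical users table from above:

    -- Create an index on the column we search by
    CREATE INDEX idx_users_age ON users (age);

    -- EXPLAIN shows whether MySQL will use the index
    EXPLAIN SELECT * FROM users WHERE age = 30;
    -- With the index in place, the "key" column should show idx_users_age
    -- rather than NULL (NULL would mean a full table scan).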

The key cache is where MyISAM index blocks are held in memory. When MySQL repairs a table, it can either repair through the key cache or repair by sorting. Sorting is faster, but depends on variables like myisam_max_sort_file_size, which dictates how much temporary space can be used for the sort. If you don't actually have that much space available, MySQL falls back to the key cache repair instead. Running mysql> show global variables like 'myisam_max_sort_file_size'; showed a value of 9223372036853727232, which, if I did my math correctly, is something ridiculous like 8589934591 GB, and is apparently the default on 64-bit installations. The tmpdir variable in my.cnf points to /tmp, which is on the root partition and had 15GB free, so I don't think either of those was the obstacle. It therefore remains a mystery to me why the repair defaulted to the key cache instead of sorting. The calculation for determining the right value for myisam_max_sort_file_size seems to be number of rows * maximum key size. By that math the indexes fit well within my sort file limit, and there's space on the drive for the sort file. It boggles my mind that the amount of space I'd need could be more than I have (let alone more than that ridiculous default).
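If you want to poke at the same knobs on your own server, this is roughly how (some_table is a placeholder, obviously):

    -- The variables that influence sort-based vs. keycache repair
    SHOW GLOBAL VARIABLES LIKE 'myisam_max_sort_file_size';
    SHOW GLOBAL VARIABLES LIKE 'myisam_sort_buffer_size';
    SHOW GLOBAL VARIABLES LIKE 'tmpdir';

    -- Kick off a repair yourself and watch its State in SHOW PROCESSLIST;
    -- "Repair by sorting" is what you want to see, not "Repair with keycache"
    REPAIR TABLE some_table;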


Sources:
https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.1/+bug/620441
http://mysqlmaniac.com/2012/how-do-we-control-mysql-daemon-in-linux-part1/
http://www.databasejournal.com/features/mysql/article.php/1382791/Optimizing-MySQL-Queries-and-Indexes.htm
http://venublog.com/2010/01/04/performance-comparison-of-repair-by-sorting-or-by-keycache/

1 comment:

  1. Did you ever figure this out? Having the same issue, and wish I knew why!
