Dealing with MySQL Replication
This will be a little different from previous posts, but I figure I should put other things in here too. I've been biking a whole lot recently, including on the streets of Buckhead at night in the rain, breaking some things on my mountain bike, and more of the same, but today I'm talking about work.
So at work we have an e-mail filtering appliance. SpamAssassin does its thing and we store all of the Bayesian filter data in a MySQL database. A master server is responsible for all of the learning, but there are also backup servers that filter email first to reduce load on the main server. To be the awesomest it can be, the backup servers all have MySQL installs that replicate off of the master MySQL database on the master server. This is great when it works, but yesterday it was pointed out to me that another table in the database, the one that does recipient verification, wasn't updating properly. It turns out there were major problems with the whole thing, and I spent a good bit of today trying to figure it all out. Here's what I learned:
- SpamAssassin uses composite keys in the token table for some reason. So in the bayes database there is an id that is the same for all tokens, and then the token itself. The key for the table is the composite of these two attributes, which is just weird since the id is pretty much always the same. (It's the same in all ~135k rows in our database.)
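For reference, the table in question looks roughly like this. This is a sketch from memory, not the exact SpamAssassin schema; the column names (num_spam in particular) are the ones from our install, and other versions use different names and types:

```sql
-- Approximate shape of the bayes token table (column names from our
-- install; your SpamAssassin version may differ)
CREATE TABLE bayes_token (
  id       INT NOT NULL,        -- bayes user id; the same value in nearly every row
  token    BINARY(5) NOT NULL,  -- raw binary token data
  num_spam INT NOT NULL DEFAULT 0,
  num_ham  INT NOT NULL DEFAULT 0,
  atime    INT NOT NULL DEFAULT 0,
  PRIMARY KEY (id, token)       -- the composite key in question
);
```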
- Various terminals and character encodings are a total pain when dealing with binary data in MySQL. The bayes token is stored as a little piece of binary data, and when updating/deleting/searching by hand for specific tokens, using wildcards (%) at the beginning and the end, as well as in place of any set of 8 bits that isn't printable ASCII, is really the only way to go. Otherwise you'll get 0 rows returned, and that's just not right.
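Something like the following is what I mean. The token values here are made up for illustration; the first query wildcards around whatever bytes happen to be printable, and the second compares against the hex encoding, which sidesteps the terminal entirely:

```sql
-- Wildcard around the printable bytes, since the non-printable ones
-- won't copy and paste cleanly from the terminal:
SELECT * FROM bayes_token WHERE token LIKE '%fo%';

-- More reliable: compare against the hex encoding of the binary token
-- (hex value here is a made-up example):
SELECT * FROM bayes_token WHERE HEX(token) = '8A6F6FD210';
```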
- When importing a large amount of data from a .sql file into MySQL and you get duplicate key errors, you have two options:
- search and replace ),( in any insert rows with )\nINSERT INTO bayes_token VALUES( . This splits each multi-row insert into one INSERT per line, so you can see which lines are duplicates and just modify those lines
- the way I chose was to add ON DUPLICATE KEY UPDATE id="5" to the end of each INSERT statement. In my particular case, all ids were set to 4, and when there was a duplicate key, the row's id got set to 5. Then I could go in, SELECT * FROM bayes_token WHERE id="5", and fix any problems by hand. Oddly enough, I could do UPDATE bayes_token SET id="4" WHERE id="5" and the rows were updated with no problem, even though this should have given them a duplicate key. Even more oddly, it doesn't seem that it actually was a duplicate. Before I updated, I ran the following SQL (I checked, and the num_spam number was unique to the one row I was looking at that was supposedly a duplicate), which should have given me both duplicate rows without worrying about copying the binary data in an unfriendly terminal, but SELECT * FROM bayes_token WHERE token = (SELECT token FROM bayes_token WHERE num_spam = 34762) only returned the one row as a result. Weird.
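Put together, the second option looks something like this. The VALUES are placeholders, not real rows from our database:

```sql
-- Tag colliding rows with a sentinel id (4 is the real id, 5 is the flag):
INSERT INTO bayes_token (id, token, num_spam) VALUES (4, 0x8A6F6FD210, 1)
  ON DUPLICATE KEY UPDATE id = '5';

-- Afterward, review the flagged rows and fold them back in by hand:
SELECT * FROM bayes_token WHERE id = '5';
UPDATE bayes_token SET id = '4' WHERE id = '5';
```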
- It is easiest to start over with MySQL replication: do the database dump on the master, take note of the binlog position, import the .sql file into the slave, and start up replication with a new server id. Trying to "fix" errors is a bit of a pain, and I ran into the following:
- "duplicate key exists" - I could go in and stop the slave, delete rows similiar to the token, start the slave again, and usually it would move on but due to not being able to correctly copy and paste the binary data to search with, sometimes 0 rows would be deleted and sometimes hundreds would be deleted. This probably had some effect on the quality of the filtering so I gave up on this approach.
- "Unable to parse replication log" - suck. I hate this error. Couldn't figure out a good way to fix this one and working with the crazy huge log file was just no fun.
So that was my fun at work today. It's time to get Gallery 1.5.1-RC2 out the door before meetings and apartment hunting this evening.