Whirled Corpse Craft

Whirled is just what it sounds like,
an online “world” har har – oh my side! Written in Flash, and thus
very accessible from all the major browsers and computer platforms.
Part casual gaming, part social networking, part creator's palette –
it is really quite unique and has an array of wildly different things
to do. I personally was highly entertained by a little game called
“Corpse Craft,” whose gameplay is described as “Build an army of
re-animated corpses to destroy your foes in this puzzle-action
hybrid,” which is quite apt. The art is great, and the story and
gameplay are entertaining. You play a game of ‘implode’ or ‘collapse’
or whatever else you want to call it on the bottom of the screen.
As you destroy groups of similarly colored blocks you increase the
number of resource blocks of the same color. You spend those
resources to launch different types of re-animated corpses across the
screen. The different corpses have different abilities that you are
trying to use to destroy your opponent's corpse re-animation factory
on the other side of the screen, who of course is sending re-animated
corpses to try and destroy you. It starts off quite easy and gets
much harder for the last few levels. Regardless, it's quite fun and
doesn’t take ages to get through.

Marin Century Ride 2008 – 106 miles on a bike – are you crazy?

So I did the Marin Century ride today, which was quite an event. The
longest ride I had done before was 45 miles up to the top of Mount
Tam, so this was a change! Mostly it is just a long time to be
riding a bike. There are several large hills on the route as you
can see on
the map
and there was a bit of a cross wind and head wind from time to time.
But, overall the weather was quite good. Sunny with a cool breeze
most of the day as we were near the coast. The early morning views
of valleys with clinging pieces of fog were classically Californian
coast. There were 4 rest stops on the route, all with lots of good
food and drink and restrooms. I had lunch part 1 at the second rest
stop and lunch part 2 at the third rest stop. After about 60 miles
my ass was starting to hurt – not surprisingly it continued to hurt
more for the rest of the ride. My left calf threatened to cramp up but
never did. My right knee got a bit twingy after 80 miles and around
95 started to hurt if I put much force on it, so I mostly used my
left leg to get me up the last big hill then coasted the rest of the
way in. I thought it was interesting that I wasn’t really slowed
down by my cardio or my leg strength but more due to the rest of
me wearing out. That is to be expected I think since I didn’t build
up slowly to 100 miles. The book The Complete Book
of Long-Distance Cycling was a pretty handy tool for figuring out
what I needed to do to train, but I only had 6 weeks and there is
only so much one can do. Oh and my right big toe had the outside
edge go numb, but it’s done that ever since that fateful backpacking
trip in ’98 so I was expecting it. I wore a lot of sunscreen but
got a little bit burned on the back of my calves as the sun shifted
around. I also had some king of bug fly into my shirt and get
caught which involved me then pulling over and frantically trying to
let said bug out as I could hear it buzzing around under my jersey.
Eventually I got it to fly out of my sleeve! The total route was
106 miles with 6250′ of elevation. From start to finish was about 9
hours for me. I was on my bike for 7 hours and 15 minutes, so
lunch, rest stops, and bug removal definitely took up some time. My
average speed was 14.3 MPH while I was on my bike. Definitely nice
that they had a good spread of food when I finished (including free
Häagen-Dazs ice cream!) I certainly slept well that night. :)

Postgres Partitioning and Hibernate – Oh the humanity!

So I was trying to set up some data to go into a partitioned table in Postgres, and since our architecture relies on Hibernate I thought it would be nice to be consistent and use Hibernate to push the data into the partition and read it back out. I also wanted the partition table creation to be handled more or less automatically.
Setting up the partitioning in Postgres was fairly easy. I created the master table FOO and an insert trigger on FOO that calls a pl/pgsql function insert_foo. The FOO table is partitioned by date, so in insert_foo I take the date of the record to be inserted (NEW.datecreated) and use that to build up the name of the partition table I really want to insert it into: FOO_082008. I then use that table name to build a string that contains an insert command (carefully using quote_literal on the values to be safe) and EXECUTE that command. I catch the undefined_table exception, which is thrown when the date rolls over to a new month for which we don’t yet have a table. In the exception handling code I dynamically create the table and rerun the original dynamic insert.
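As a rough sketch of the tables that setup assumes (column names here are illustrative, not the real schema):

```sql
-- master table; with the trigger in place, no rows actually live here
CREATE TABLE foo (
    id          BIGINT PRIMARY KEY,
    datecreated TIMESTAMP NOT NULL
    -- ... remaining columns ...
);

-- one child table per month, with a CHECK constraint so the planner
-- can skip irrelevant partitions via constraint exclusion
CREATE TABLE foo_082008 (
    CHECK (datecreated >= DATE '2008-08-01'
       AND datecreated <  DATE '2008-09-01')
) INHERITS (foo);

-- route every insert on the master through the partitioning function
CREATE TRIGGER foo_insert_trigger
BEFORE INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE insert_foo();
```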
This actually all works quite well. The problem is Hibernate, or more specifically Hibernate helpfully trying to check for errors for you. Basically, when you tell Hibernate to save a Foo object it runs the insert on FOO. The trigger catches that, inserts the data into FOO_MMYYYY instead, and returns NULL so no further processing is done by the database. The JDBC driver then reports that it inserted zero rows into the FOO table, which is technically true, and Hibernate freaks out because it expects that 1 row should have been saved. That is reasonable enough, but annoyingly there is no way to tell Hibernate you really expect zero rows. The exception that is thrown is a fairly generic HibernateException, so the only way to catch and swallow this one particular case would be to text match on the error message. We all know what a terrible idea that is, so we are a little SOL.
There are two things that seem like they would work with Hibernate. One is to use a Postgres RULE instead of a pl/pgsql FUNCTION to do the partitioning. RULEs basically rewrite the SQL you are going to run, so from the JDBC driver's point of view you should get back that you did, in fact, save 1 row to FOO_MMYYYY. However, I’ve never used rules, and from what I can gather from the less than totally awesome documentation on the subject, it doesn’t seem like I can do the same level of magic table creation. You would have to maintain the rule so that each month you added a new if/then check to save the data to the appropriate table for the month, and you would have to create the new table for the month. Even if you did that once a year and pre-created a year’s worth of tables, it is still maintenance and someone could still screw it up. (Quite easily, given my experience with DBAs ;)
The other option is fairly hacky but does work. If insert_foo returns NEW instead of NULL, then the insert operation continues just as if the trigger had never fired, the JDBC driver reports 1 row saved, and Hibernate is happy. Of course, the problem is that we now have one copy of the data in FOO and one in FOO_MMYYYY. That’s no good: all the FOO_MMYYYY tables inherit from FOO, so all queries on FOO will return duplicate results. To get around that you can make a table FOO_IN with the same definition as FOO. In the Hibernate mapping you map FooIn to FOO_IN and add a trigger on FOO_IN to call insert_foo. You modify insert_foo to return NEW and to delete from FOO_IN. This all results in a copy of the data going into FOO_MMYYYY and another going into FOO_IN, which is deleted the next time anything is inserted. Of course, you can’t use Hibernate to read from FOO_IN since there is nothing there, so you create another mapping for a class FooOut that is the same as FooIn but maps to the table FOO. This is a little redundant, but you only have to do it once.
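For reference, the hand-maintained RULE variant for a single month might look something like this (a sketch I have not run; table and column names are illustrative):

```sql
-- one rule per month: rewrite qualifying inserts on the master table
-- into inserts on that month's child table, so the driver reports 1 row
CREATE RULE foo_insert_082008 AS
ON INSERT TO foo
WHERE NEW.datecreated >= DATE '2008-08-01'
  AND NEW.datecreated <  DATE '2008-09-01'
DO INSTEAD
    INSERT INTO foo_082008 VALUES (NEW.*);
```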
You can make FooIn and FooOut inherit from a common FooBase and use that in the places where the data could be read in or out.
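The FOO_IN side of that arrangement might be set up roughly like this (a sketch; it assumes insert_foo has already been modified to RETURN NEW and to clear FOO_IN as described above):

```sql
-- staging table with the same shape as foo, deliberately NOT inheriting
-- from it, so queries against foo never see these rows
CREATE TABLE foo_in (LIKE foo);

-- the modified insert_foo copies the row into the right foo_MMYYYY
-- child and deletes the leftover foo_in row on the next insert
CREATE TRIGGER foo_in_insert_trigger
BEFORE INSERT ON foo_in
FOR EACH ROW EXECUTE PROCEDURE insert_foo();
```

FooIn is then mapped to foo_in for writes, and FooOut to foo for reads.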
If there were some way to do a DELETE in Postgres that doesn’t cascade to the child tables, you could get away with one mapping: insert_foo could return NEW and also delete from the FOO table. That is a little problematic, as you would always have 1 duplicate row in the master table, but I can’t figure out how to actually do that, so it isn’t much of an issue.
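(It may be worth noting that Postgres's ONLY keyword looks like it does exactly this – it restricts a statement to the named parent table without touching the inheritance children – though I haven't tried it in this setup; some_id below is just a placeholder:)

```sql
-- ONLY restricts the DELETE to the parent table itself;
-- rows in the inheriting foo_MMYYYY children are left alone
DELETE FROM ONLY foo WHERE id = some_id;
```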
Of course, depending on what you are doing, you can also use straight SQL, but it is kind of annoying to mix and match like that with the database. Anyway, maybe there is a better way, but I couldn’t work it out after a day of poking around.

Here is an example of using the date to do the partitioning in a Postgres pl/pgsql function:

CREATE OR REPLACE FUNCTION dw_foo_insert_trigger()
RETURNS TRIGGER AS $$
DECLARE
    dateTable TEXT;
    cmd TEXT;
BEGIN
    dateTable := 'foo_' || to_char(NEW.transaction_date, 'MMYYYY');

    -- you could also probably do NEW.* if you don't care about column order changing.
    cmd := 'INSERT INTO ' || dateTable || ' (id, code, transaction_date, override_id, product_code)' ||
           ' VALUES (' || quote_literal(NEW.id) || ',' ||
           quote_literal(NEW.code) || ',' ||
           quote_literal(NEW.transaction_date) || ',' ||
           quote_literal(NEW.override_id) || ',';

    -- concatenating strings with a NULL value yields NULL, so check for those explicitly
    IF (NEW.product_code IS NULL) THEN
        cmd := cmd || 'null' || ')';
    ELSE
        cmd := cmd || quote_literal(NEW.product_code) || ')';
    END IF;

    EXECUTE cmd;
    RETURN NULL;

EXCEPTION
    WHEN undefined_table THEN
        DECLARE
            createTable TEXT;
            createIdxTUID TEXT;
        BEGIN
            -- create the new child table from the parent
            createTable := 'CREATE TABLE ' || dateTable || ' (' ||
                'CHECK ( transaction_date >= DATE ''' ||
                to_char(date_trunc('month', NEW.transaction_date), 'YYYY-MM-DD') ||
                ''' AND transaction_date < DATE ''' ||
                to_char(date_trunc('month', NEW.transaction_date + interval '1 month'), 'YYYY-MM-DD') ||
                ''')) INHERITS (foo)';

            -- create the index on the child table
            createIdxTUID := 'CREATE INDEX IDX_' || dateTable || '_ID ON ' || dateTable || ' (id)';

            EXECUTE createTable;
            EXECUTE createIdxTUID;

            -- and rerun the command now that the child table exists
            EXECUTE cmd;
            RETURN NULL;
        END;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER dw_foo_trigger
BEFORE INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE dw_foo_insert_trigger();

References: Hibernate in Action is a pretty good reference for Hibernate, and PostgreSQL Developer’s Library is one I want to get for Postgres at some point in the future.