Sometimes you need to transform data in a more complex way than SQL allows (even with custom UDFs). Specifically, if you want to return a different number of columns, or a different number of rows for a given input row, then you need what Hive calls a transform. This is basically a custom streaming map task.
1. You are not writing an org.apache.hadoop.mapred.Mapper class! This is just a simple script that reads rows from stdin (columns separated by \t) and writes rows to stdout (again, columns separated by \t). It's worth repeating: you shouldn't be thinking in terms of keys and values; you need to think in terms of columns.
2. You can write your script in any language you want, but it needs to be available on all machines in the cluster. An easy way to do this is to take advantage of Hadoop's distributed cache support and just use add file /path/to/script within Hive. The script will then be distributed and can be run as just ./script (assuming it is executable), or 'perl script.pl' if it's Perl, etc.
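To make the contract concrete, here is a minimal sketch of such a streaming script (Python used here purely for illustration; the function name is mine, not from the post). It does nothing but read tab-separated columns from stdin and write tab-separated columns to stdout:

```python
#!/usr/bin/env python3
"""Illustrative sketch of the Hive transform contract: columns in, columns out.
No keys or values -- just tab-separated columns on stdin and stdout."""
import sys

def transform_row(cols):
    # A real script would reshape the columns here (emit more/fewer columns,
    # or multiple output rows per input row); this one passes them through.
    return cols

if __name__ == "__main__":
    for line in sys.stdin:
        cols = line.rstrip("\n").split("\t")
        print("\t".join(transform_row(cols)))
```

Anything you print becomes output rows for Hive; emitting two lines for one input line yields two output rows.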
This is a simplified example, but recently I had a case where one of my columns contained a bunch of key/value pairs separated by commas, and I wanted to transform those records into a two-column table of k/v.
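The original sample data isn't reproduced here; assuming a hypothetical k=v format for each pair, the before/after shape is something like:

```
-- all_kvs (input column)      -- test_kv_split (k, v)
k1=v1,k2=v2,k3=v3              k1    v1
                               k2    v2
                               k3    v3
```

One input row fans out into one output row per pair, which is exactly what a plain SQL select can't do.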
I wrote a simple Perl script to handle the map, created the two-column output table, then ran the following:
-- add script to distributed cache
add file /tmp/split_kv.pl
-- run transform
insert overwrite table test_kv_split
select transform (kvs)
       using 'perl split_kv.pl'
       as (k, v)
from (select all_kvs as kvs from kv_input) d;
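The split_kv.pl script itself isn't shown above. As a sketch of its logic (in Python for illustration, and assuming the hypothetical k=v pair format -- the real separator may differ), it just explodes each input line into one tab-separated row per pair:

```python
#!/usr/bin/env python3
"""Illustrative Python equivalent of split_kv.pl: turn one column of
comma-separated 'k=v' pairs into one (k, v) row per pair."""
import sys

def split_kv(kvs):
    """'k1=v1,k2=v2' -> [('k1', 'v1'), ('k2', 'v2')]"""
    rows = []
    for pair in kvs.split(","):
        k, _, v = pair.partition("=")
        rows.append((k, v))
    return rows

if __name__ == "__main__":
    for line in sys.stdin:
        for k, v in split_kv(line.rstrip("\n")):
            print(k + "\t" + v)
```

Note that it emits multiple output rows per input row -- the fan-out that makes this a transform rather than a UDF.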
As you can see, you can specify both the input and output columns as part of your transform statement.
And... that's all there is to it. Next time... a reducer?