Tuesday 25 August 2009

The Scariest Story I Ever Read/Spoiler Alert

For a while now, I've been trying to put together a story that starts: "The scariest story I ever read was 'Zuckerman Unbound' by Philip Roth". It's tough to do without making it quite obviously autobiographical (if I ever do put my Plan B into place, I might use it there, I suppose). That's because 'Zuckerman ...' isn't anything like a horror story. It's actually a tragicomedy about a Jewish writer who becomes famous for writing a novel full of sex and snide remarks about other Jews, and ends up being identified with the protagonist of his novel. Not something you'd normally consider scary, to be honest. Then again, I don't read scary stuff normally, so what do I know (come to think of it, I haven't even read any Stephen King).

What freaked me out were actually two big plot points, which seemed to reverberate with my own particular context at that time. [Note: I suppose this is the point where I go 'SPOILER ALERT!'] On the one hand, there was this side character in the story called Alvin Pepler, who is supposed to have been a big winner on the TV quiz shows that were prevalent in the '50s (before being convinced to throw a round by the producers, in a plot similar to the one covered in the movie 'Quiz Show'). Roth portrays him as this gasbag living in his past, defined by what happened to him on TV but also trying to escape it (I hope you see where I'm going with this). I read the book in my first year of college, when my identity was still defined to an extent by the fact that I'd been on TV and won the BQC. It scared me then to think that my one big life-defining moment might already be behind me at the age of 18.

The other big scary plot point, ['SPOILER ALERT 2!', if you will] was right at the end, when Zuckerman's father, who is dying, uses his last breath to abuse him for writing a book that basically brought shame to their respectable family and made fun of the community. This sounds almost maudlin the way I describe it here, and Roth obviously lays it all out better in the book, but it freaked me out even more. Back then I had pretensions to becoming a full-fledged writer at some point, and to have this whole potential future guilt-trip laid onto my sweet, family-comes-first Mallu Catholic soul was unexpected when I'd started reading the book. I knew I didn't have sufficient imagination to come up with an entire other world a la Tolkien, but I could see myself putting out a decent stream of snappy farces satirizing the world around me. The thought of someone, and that too someone close to me, taking it all personally hadn't occurred to me, until then. Honestly.

Of course, much water has passed under many bridges since the time I first read the book. For one, I'm now known less for being a good school quizzer and more for being an above-average college one, among other things, so the ghost of Pepler doesn't haunt me so much. As I said, I've been meaning to write about the book for a while now, but lacking sufficient inspiration, I whacked it from my parents' home last time I visited and re-read it on the flight back. I'm glad to say I found it a much more fun read this time, and not as scary. Then again, that could be because I'm growing old and giving in to convention anyway, so there's less likelihood of giving offence. Come to think of it, that's a scary thought too.

UPDATE: Added in links, and due attribution to Han. Also, if you're interested, here's a review of 'Zuckerman Unbound' from the NY Times. And finally, here's a list of 15 books that I like, which I put together because Han tagged me on Facebook.

Tuesday 4 August 2009

'Forever' Means 'I'm Willing to Play this Iterative Prisoner's Dilemma Game Indefinitely'

While killing time on Google Reader, I came across two posts on relationships, one on Marginal Revolution and the other on Overcoming Bias. They got me thinking about an idea I had a while ago: that it might be fun to apply the game-theoretic framework of the Prisoner's Dilemma to how a relationship might form and survive*. I finally got around to typing it up into a post (subscripts and superscripts are killer), so here goes:

(Important Disclaimer: I'm not taking myself seriously in this post, and neither should you).

Consider an individual x with a utility function broadly as follows:

U(x) = U_x if x is single, and

U(x) = U'_x + U_x^R if x is in a relationship

Where

U(x) is x's total utility (Yes, ok that's crappy notation, see disclaimer above)

U_x is the utility that x gets from generally getting on with the day-to-day aspects of his/her life,

U'_x is the utility that x gets from generally getting on with the day-to-day aspects of his/her life when in a relationship, and U_x^R is the added utility that x gets from being in a relationship because the other person commits.

Assumption 1: Consider that committing to a relationship usually involves some sort of change in one's daily routine and possibly even more sacrifice, so we can assume that usually,

U_x > U'_x

Although, one would presume that

U'_x + U_x^R > U_x

(call this presumption 1, if you will; without this presumption, of course, further analysis would be meaningless)
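
(To make that concrete with some entirely made-up numbers: say U_x = 10, U'_x = 8 and U_x^R = 5. Assumption 1 holds since 10 > 8, being in a relationship costs x a couple of units of everyday utility, and presumption 1 holds since 8 + 5 = 13 > 10, the bonus from the other person's commitment more than makes up for it. I'll reuse these numbers in the snippets further down.)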

Now consider another individual, y, with a similarly formed utility function V(y) with components V_y, V'_y and V_y^R.

Assuming that x and y are of the right gender to suit their respective orientations and are open to getting into a relationship, a one-off encounter between them could be considered within the simple Prisoner's Dilemma framework as follows:


              |  y: Commit                      |  y: Defect
x: Commit     |  U'_x + U_x^R , V'_y + V_y^R    |  U'_x , V_y + V_y^R
x: Defect     |  U_x + U_x^R , V'_y             |  U_x , V_y

(In each cell, the first entry is x's utility and the second is y's.)


Here, since the bonus utility (U_x^R or V_y^R) comes from having the other person commit to the relationship, if one party defects while the other commits, the defector gets the bonus and the one who commits doesn't. Think of this in terms of the committing partner having to make the sacrifices but not getting much of the reward from being in the relationship. Obviously, then, as long as assumption 1 above holds, both x and y would defect in a one-off encounter, as in the standard single-iteration PD game, ending up with utilities of U_x and V_y respectively. That's one way of explaining why something like 'love at first sight' happens so rarely (perhaps a relaxation of assumption 1 is required?).
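
For the more code-minded, here's a rough sketch of that one-off logic in Python, using the made-up numbers from earlier (they're arbitrary; all that matters is that assumption 1 and presumption 1 hold, and I've assumed similar numbers for y):

```python
# A quick sanity check of the one-off encounter, with made-up numbers.
# The values are arbitrary; all that matters is that they satisfy
# assumption 1 (U_x > U'_x) and presumption 1 (U'_x + U_x^R > U_x),
# and likewise for y.
U_single, U_rel, U_bonus = 10, 8, 5   # x's U_x, U'_x, U_x^R
V_single, V_rel, V_bonus = 9, 7, 6    # y's V_y, V'_y, V_y^R

# payoffs[(x_move, y_move)] = (x's utility, y's utility)
payoffs = {
    ("commit", "commit"): (U_rel + U_bonus,    V_rel + V_bonus),
    ("commit", "defect"): (U_rel,              V_single + V_bonus),
    ("defect", "commit"): (U_single + U_bonus, V_rel),
    ("defect", "defect"): (U_single,           V_single),
}

def best_response(player, other_move):
    """The move that maximizes a player's one-off payoff, holding the
    other player's move fixed."""
    moves = ["commit", "defect"]
    if player == "x":
        return max(moves, key=lambda m: payoffs[(m, other_move)][0])
    return max(moves, key=lambda m: payoffs[(other_move, m)][1])

# Whatever the other player does, defecting is the better reply, so
# (defect, defect) is where the one-off encounter ends up.
for other in ["commit", "defect"]:
    print("x's best reply if y plays", other, "->", best_response("x", other))
    print("y's best reply if x plays", other, "->", best_response("y", other))
```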

The single-iteration PD game can be extended by considering iterative game play. First, consider iterative play with a defined number of iterations, say t. If I remember my introductory game theory classes, this does not arrive at a 'satisfactory' solution. Both players may consider committing since it means higher gains over time, but because the number of iterations is fixed, it becomes rational to defect in the t-th iteration and aim for the highest possible gain there. But if you know that your partner is going to defect at t, you could opt to defect at the (t-1)-th iteration itself, so you can try for the maximum gain in that iteration and avoid being duped at t. Since both parties would think this way, they end up defecting from the first iteration itself. Not very romantic, but then again there are very few cases where you'll find people getting into a relationship with a clearly defined end-date. (There are examples, of course, but I'll leave you to find them and post them in the comments. I would guess, though, that in most of those cases assumption 1 would not hold.)
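
Continuing the snippet above, here's roughly what that unravelling argument looks like in code. The one step I'm assuming away in a comment is the standard one: once both players have reasoned back from the last round, nothing you do today changes future play, so every round collapses to the one-off game.

```python
# Continuing from the snippet above (reuses `payoffs` and `best_response`).
def stage_equilibria():
    """All (x_move, y_move) pairs where each move is a best reply to the other."""
    moves = ["commit", "defect"]
    return [(mx, my) for mx in moves for my in moves
            if best_response("x", my) == mx and best_response("y", mx) == my]

def finitely_repeated_outcome(t):
    """Play over t rounds with a known end-date. In the last round the only
    self-consistent play is the one-off equilibrium; since that fixes the
    future no matter what happened earlier, every previous round reduces to
    the same one-off game, all the way back to round 1."""
    (eq,) = stage_equilibria()   # unique here: ('defect', 'defect')
    return [eq] * t

print(finitely_repeated_outcome(3))
# [('defect', 'defect'), ('defect', 'defect'), ('defect', 'defect')]
```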

On then to the next case: the infinitely repeated PD game. Here, if both players profess undying love and commit to each other, they get pay-offs of U'_x + U_x^R and V'_y + V_y^R in each iteration. They can also set credible threats for each other, so that any defection could be met with some sort of punishment: a consequent defection in the next n iterations, say, or a 'grim' trigger strategy where any defection by one player is met with the other player also defecting for all further iterations. These punishments ensure that the players are better off committing than defecting in any one iteration. And how do both players know that the game is infinitely repeated? By repeatedly asserting the same, and/or locking in the commitment through a contract aka marriage. In such a framework, then, as long as neither player defects, both will maximize utility over all future iterations, or, as they say in the literature, they go on to live happily ever after.
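
And, continuing the same script, a quick arithmetic check of why the threat works with those made-up numbers. I've used the milder 'defect for the next n iterations' punishment rather than the grim trigger, just to keep the sums finite:

```python
# Continuing the same script: x is tempted to defect once against a
# committed y, knowing y will then defect for the next n rounds in revenge.
def payoff_if_faithful(n):
    """x commits throughout this round and the next n (everyone cooperates)."""
    return (n + 1) * payoffs[("commit", "commit")][0]

def payoff_if_cheating(n):
    """x defects once against a committing y, then sits through n rounds
    of mutual defection while the punishment runs."""
    return payoffs[("defect", "commit")][0] + n * payoffs[("defect", "defect")][0]

for n in range(3):
    print(n, payoff_if_faithful(n), payoff_if_cheating(n))
# n=0: 13 vs 15 -> with no punishment at all, cheating pays (the one-off logic)
# n=1: 26 vs 25 -> even one round of punishment tips it back, with these numbers
# n=2: 39 vs 35 -> and the gap keeps widening as the punishment gets longer
```

The grim trigger is just the limiting case where the punishment never ends, so the comparison only gets more lopsided.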



Homework questions (Answer in the comments, if you please):

  1. What happens if one person thinks that the game is infinitely repeated, and the other knows it's going to be finite? Read the post on MR again. How would this analysis apply there?

  2. How does the analysis change if we relax assumption 1? What inferences would you make about a person for whom U_x < U'_x?


*Incidentally, I was considering naming this post 'Prem Qaidi', but I wasn't sure how many people would get the joke...