It's time for me to leave Provo pretty soon. I'll probably end up in Seattle, although Portland, Madison and Denver are all possibilities as well.
I'm looking forward to actually starting a career-type job. One that pays better than the mostly $9.00/hr jobs I've had so far. While there are lots of good things about being a college student, not having much money isn't particularly one of them.
Will I miss Provo? Perhaps a bit. I don't think there's really anywhere else in the United States quite like it. And I have been here five years. But it's time to go.
So it goes.
Friday, June 20, 2008
Saturday, April 14, 2007
Bounded Rationality
"As you know, sir, in the heat of action, men are likely to forget where their best interests lie and that their emotions carry them away." (The Maltese Falcon (1941))
On Thursday, my game theory professor gave three reasons why people fail to behave rationally in the real world (the theory of "bounded rationality"): inability to calculate a rational course of action, inability to implement a rational course of action, and emotion overcoming a rational course of action.
Emotion definitely has the power to overcome rational-seeming courses of action. And that's not always a bad thing (even though some part of me still wants to believe that it is). "Le cœur a ses raisons que la raison ne connaît point." ("The heart has its reasons, of which reason knows nothing.") I would suspect that in interacting with other people, emotion is a much larger barrier to acting rationally than inability to calculate or implement a rational choice.
So, I am currently taking a course of action that could definitely be perceived as irrational. Is it? Who knows. And I don't really care. Living life is a lot more fun than analyzing it.
Wednesday, February 14, 2007
Game Theory and Relationships
The prisoner's dilemma is a well-known construct in game theory, describing a game in which each player has the choice to "cooperate" or "defect". In the classic construction, the players are suspects in some crime. The prosecutors offer each of them a deal: if one person betrays the other ("defects") and one stays silent ("cooperates"), the defector will go free while the cooperator gets ten years in jail. However, if both defect, each will get five years. If each stays silent, the prosecutors don't have enough evidence to convict, and each receives a sentence of one year on a minor charge.
Clearly, both players would be best off if each chose to cooperate (the Pareto efficient outcome). However, the best strategy is for each to defect. Why? Consider one player. If his accomplice cooperates, his best strategy is to defect, receiving zero years in jail instead of one. If his accomplice defects, his best strategy is again to defect, receiving five years instead of ten. Since both players face the same choices, the equilibrium result is that both players defect.
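The dominant-strategy argument above can be checked mechanically. Here's a minimal sketch using the jail terms from the story as (negative) payoffs, so the best response is the move that minimizes years in jail:

```python
# The prisoner's dilemma payoffs described above: my years in jail,
# given (my move, accomplice's move).
payoffs = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    10,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    5,
}

def best_response(their_move):
    """Return the move that minimizes my jail time, given the other's move."""
    return min(["cooperate", "defect"], key=lambda m: payoffs[(m, their_move)])

# Defecting is best no matter what the accomplice does -- a dominant strategy.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

Since both players face this same table, (defect, defect) is the equilibrium, exactly as argued above.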
I suspect that a similar situation arises when two people are deciding whether to start a relationship or not. Consider the following situation: If both players choose "relationship", each receives a utility (or well-being or satisfaction) of 50. If both players choose "no relationship", each receives utility of 40. However, if one chooses "relationship" and one chooses "no relationship", the "relationship" player, broken-hearted, receives utility of -30, while the "no relationship" player, free to pursue a new relationship and glad the other person is finally out of the situation, receives utility of 60. This set-up leads to the same outcome; both players will choose "no relationship" even though they would both be better off by choosing "relationship". Thus no rational person should ever start a relationship. This seems to be a rather depressing outcome.
There is a possibility, however, in a prisoner's dilemma situation, that both players choosing to cooperate is an equilibrium outcome (that is, one from which neither player has an incentive to deviate). If the game is played repeatedly (an infinite or unknowable number of times), each player's choice in the current round will likely have an effect on the other player's choice in future rounds. That is, suppose one player violates the trust of the other by choosing to defect. While this player may obtain a better outcome this round, the other player is likely to "punish" him in the future by also defecting. This leads to an incentive for both players to cooperate every round, and this will work as long as both players continue to cooperate every time.
One of the best possible strategies for this "iterated" prisoner's dilemma is called "tit-for-tat". Essentially, the strategy is to cooperate in the first round, then in each future round, do whatever the other player did the round before. If both players choose tit-for-tat, each will cooperate forever. However, any deviation is likely to lead to an endless string of defections. Tit-for-tat was the "winner" in a tournament of strategies submitted by various academics, conducted in the 1980s. It turns out that a slightly better strategy is "tit-for-tat with forgiveness". This allows a small probability of choosing to cooperate even though the other player has defected, potentially breaking a long string of defections.
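Tit-for-tat is simple enough to simulate. The sketch below (my own toy implementation, not from the original tournament) shows the two behaviors described above: two tit-for-tat players cooperate forever, and a single defection echoes back and forth indefinitely:

```python
def tit_for_tat(opponent_history):
    """Cooperate first; afterwards, copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds):
    """Play two strategies against each other; return each player's moves."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each player sees the OTHER's history
        move_b = strategy_b(hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

# Two tit-for-tat players cooperate every round.
a, b = play(tit_for_tat, tit_for_tat, 10)
print(a == ["C"] * 10 and b == ["C"] * 10)  # True

# A single opening defection against tit-for-tat produces an endless
# alternating string of defections -- the "punishment" never dies out.
def defect_once_then_tft(opponent_history):
    return "D" if not opponent_history else tit_for_tat(opponent_history)

a, b = play(defect_once_then_tft, tit_for_tat, 6)
print(a)  # ['D', 'C', 'D', 'C', 'D', 'C']
print(b)  # ['C', 'D', 'C', 'D', 'C', 'D']
```

This is also why "tit-for-tat with forgiveness" helps: an occasional unprompted cooperation can break that alternating cycle.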
So what does this mean for relationships? It means that a "relationship" outcome is possible. Making the reasonable assumption that each person will re-evaluate their status in the relationship from time to time, a strategy of "always choose relationship" may be viable in the long run. The game certainly has the potential to be played an infinite or unknowable number of times, continuing through this life and beyond.
However, this strategy depends on one's ability to trust the other person to follow the same strategy. This trust can be difficult to establish, and once broken, can be very difficult to re-establish, potentially leading to the suboptimal outcome. Also, it's generally at the beginning of a relationship that each person has the most difficulty learning to trust the other, potentially leading to an early loss of trust and a permanent outcome of "no relationship". It may be that a rational person always chooses "no relationship" to avoid ever receiving the negative utility of a broken heart.
But clearly this is not what I'd like to believe (I don't think anyone would). Do people simply act irrationally when it comes to these things? I don't like that answer either. Perhaps it just takes a circumstance in which each person somehow decides to have perfect (or almost-perfect) trust in the other. I'm not sure. Perhaps someday I'll figure it out.
Sunday, October 29, 2006
Economics of Voting
According to my microeconomics professor (Dr. Mark Showalter), voting in elections is not a rational action to take. In many places, you have to take time out of your day to drive to the polling place, cast your ballot, and drive home. But the effect of your individual vote is essentially zero, since it is highly unlikely that your vote will tip the election one way or the other. So, according to Dr. Showalter, no rational person ought to vote.
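The argument boils down to a back-of-envelope expected-value calculation. The numbers below are my own illustrative assumptions (not Dr. Showalter's), but the conclusion is robust: the probability of casting the pivotal vote is so tiny that any plausible benefit is swamped by the cost.

```python
# Expected payoff of voting = P(pivotal) * value of your side winning - cost.
# All three numbers are assumed for illustration.
p_pivotal = 1e-7      # chance that your single vote tips the election
benefit   = 10_000    # dollar value you place on your preferred outcome
cost      = 20        # time, gas, and hassle of getting to the polls

expected_payoff = p_pivotal * benefit - cost
print(expected_payoff)  # roughly -20: the cost dominates
```

Even inflating the benefit a thousandfold leaves the expected payoff negative, which is the heart of the "no rational person ought to vote" claim.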
On the other hand, Dr. Showalter admits that he personally votes. Why? Mostly out of a vaguely-defined sense of "civic duty". Presumably, he gets some personal utility out of casting his ballot, and so he does so every year. His argument is that if you are going to vote, it should not be because you think you will make a difference by voting.
I recently mailed in my ballot for this year's midterm election. I'm registered to vote in Washington where almost everybody votes by mail (in fact, that is the only way to vote in 34 out of Washington's 39 counties). So, for me (presumably), the cost of voting is not nearly as significant -- I can sit here in my apartment and vote at my leisure. Is it more rational to vote like this? Perhaps, but since the candidates I voted for are heavily favored to win, I still don't have much hope of actually influencing the election. (In fact, there were about 15 unopposed races on the ballot. I skipped all of these.)
So is it a good idea to vote? It probably doesn't do too much harm. Plus, there's the argument that if everyone acted rationally and did not vote, then it would become a rational act for one person to vote and determine the entire election. So, voting must be rational on some level... but it probably is not in general.
Friday, October 27, 2006
Free Will
Do humans have free will? I like to think about this in terms of computer programs, for whatever reason. Consider the following computer programs (assuming no errors in software or hardware):
1. A program that displays the message "Hello". This program cannot reasonably be said to have free will. Its creator knows what will happen every time it is ever run.
2. A program that asks the user to input a string of text and displays whatever was input. In this case, the program has no free will. Its creator does not have any idea what the result will be each time the program is run. However, given a certain input, the program is guaranteed to produce a certain output.
3. A program that displays a (pseudo-)random number (say, between 0 and 1). This program cannot reasonably be said to have free will either. Although its output is not easily predictable, it is still governed by the specific algorithm for generating random numbers. This algorithm, given the same inputs, will produce the same output every time. In a sense, the creator of the program "knows" what the result will be.
4. A program that accurately simulates a human brain. Now, of course, neither the hardware nor the understanding required for such a program currently exists. But let's suppose that it did. I contend that such a program is possible. Given a particular state of the electrons in a brain, there are specific rules (whether or not they are currently understood) that dictate where these electrons will flow. By "programming" such rules, one could potentially simulate a human brain.
Could this program be said to have free will? I don't know, but I lean toward saying no. It is simply operating under the instructions given to it; in a sense, it is nothing more than a complicated combination of the first three types of programs mentioned above. Given a certain input, it doesn't seem reasonable to conclude that the program can "decide" on its own which output to produce. If the program did have free will, which additional line of code gave it "free will"? Is there something specific that divides entities with "free will" from entities without it? Finally, in what way is this program different from a "real human"?
I don't know the answers to these questions, which is probably why I don't find free will to be convincing (although I'm willing to be convinced otherwise). But this leads to a rather fatalistic, depressing view of the universe. So I try not to think about it too often.
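The first three programs above are simple enough to sketch directly, which makes the determinism point concrete. (The fourth, of course, remains hypothetical.)

```python
import random

def program_1():
    return "Hello"                 # same output every run

def program_2(user_input):
    return user_input              # output fully determined by the input

def program_3(seed):
    rng = random.Random(seed)      # seeded, so "random" yet reproducible
    return rng.random()

# Even the "random" program gives the same answer whenever its inputs
# (here, the seed) are the same -- the creator "knows" the result.
print(program_3(42) == program_3(42))  # True
```

Each program's behavior is a pure function of its inputs, which is exactly the property that makes it hard to locate any "free will" in the code.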
Thursday, October 26, 2006
How To Fail
In seventh grade, I was in a required speech class at my junior high school. One of the speeches we had to give had to be a "how-to" speech, describing the best way to perform a particular task. Out of ideas, I decided to give my speech on "How to Fail Your Speech". My grade for the assignment? 100%.
So, I'm a grader for the math department here at BYU. Often, students will turn in papers that, for one reason or another, deserve a less-than-optimal score. Therefore, I present (based, unfortunately, on reality):
How Not to Do Well on Your Math Assignment
1. Don't write your name on your paper. This is completely unnecessary and may result in you actually receiving credit for the work you did.
2. Hand in a large quantity of work, preferably at least 10 pages stapled together. But make sure that you do not do all the problems assigned, and that you do plenty of unassigned problems. The grader may be impressed by the sheer volume of work you produced, but this is doubtful.
3. Alternatively, hand in a very small quantity of work. In fact, ideally, you would just write down the numbers of all the problems without actually doing any work related to the problems assigned.
4. Write in letters so small that no one can possibly tell what you have written, especially the grader.
5. Alternatively, organize your work in such a fashion that following it is very difficult. Working problems from right to left on the page, for example, works well. Also, it's best to give no indication of which part of your work is the actual answer.
6. Finally, write an amusing note to the grader about how horrible your life is and how you just can't possibly succeed. The grader will laugh mercilessly and enjoy giving you your deserved low score.