Jayachandran (2006) on the Jeffords Effect

Seema Jayachandran’s paper (here is what appears to be the final draft, later published in the Journal of Law and Economics) examines what she calls the “Jeffords Effect”: the change in the value of politically connected firms in conjunction with the unexpected change in Senate control that came with Vermont senator James Jeffords’ departure from the Republican Party. She documents that firms that donated to the Republican Party lost almost 1% of their value in the week after Senate control unexpectedly shifted, while firms that supported Democrats gained almost half a percent.

As Jayachandran notes, her findings provide strong evidence that many firms’ value is strongly affected by which party is in power, and that firms on average contribute more money to the party that is more favorable to their interests. What is not clear from the association between political control and the market value of donor firms is whether the firms gave money in order to get their preferred party in power or, instead, whether they gave money to buy favors from politicians in the recipient party. To me, the most intriguing thing about the paper (once the careful empirical work of showing the association between donations and abnormal returns is out of the way) is the set of back-of-the-envelope calculations Jayachandran uses to assess these alternative interpretations of corporate soft money contributions. I’ll relate that logic here.

As a starting point, she estimates that $1 in soft money contributions bought roughly $46 in market capitalization in the Jeffords episode. (The actual gain was more like $2300, but she reasonably cuts this down by a factor of 50 because the soft money was only part of a contribution pattern that occurred over a longer period of time and included other forms of contributions.) If the Jeffords event were known in advance, this would be some seriously easy money and firms should have given a lot more. But it was not known in advance; political contributions here, as elsewhere, were a risky investment with uncertain outcomes. So Jayachandran looks at how risky an investment it was under different possible motivations for giving.

If firms gave money in order to help their favored party get elected (in other words, if the Jeffords event was significant because it handed power to the Democrats), the probability of being “successful” (ie providing the contribution that would tip the scales and get your party into office) was very low. Jayachandran uses .01 percent as a rough estimate of the probability of a given contribution tipping the scales, and of course the true figure must be much lower than this. Given that the benefit of party control is estimated at $46 per dollar contributed, the low probability of success makes this form of investment very unattractive indeed. If this were the motivation, the puzzle would be why any firm contributes at all.

If instead firms gave money in order to curry favor with one or the other party in anticipation of getting legislative benefits when that party is in power, the investment starts to look a lot better. The probability of your party being in power after the 2000 elections was about .5; even allowing for some uncertainty about whether that party can come through with favors, the probability that the investment is a success (and thus that the $46 on the dollar is collected) is quite high. At this ROI the puzzle is why firms don’t contribute much more.
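
To make the comparison concrete, here is the back-of-the-envelope arithmetic in a few lines of Ruby. The numbers are just the rough figures quoted above, nothing more precise:

# Back-of-the-envelope expected returns under the two motives.
# All figures are the rough numbers quoted above.

payoff = 46.0  # market cap gained per $1 of soft money
               # (the ~$2300 raw gain scaled down by a factor of 50)

# Motive 1: contribute in order to tip party control.
p_pivotal = 0.0001            # Jayachandran's generous 0.01%
puts p_pivotal * payoff       # roughly $0.005 back per dollar given

# Motive 2: curry favor with a party that ends up in power with probability ~0.5.
p_in_power = 0.5
puts p_in_power * payoff      # $23 back per dollar given

On these numbers the first motive is a terrible investment and the second an absurdly good one, which sets up the puzzle the next step resolves.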

Jayachandran resolves this puzzle by arguing that only a small part of the Jeffords effect reflects the quid pro quo postulated in the second explanation above. If 99% of the jump in market value of Democrat-supporting firms came about because a party with a favorable ideology came to power, and only 1% because bought politicians came to power, then the ROI from currying favor with politicians would be a much more reasonable 12%. This makes sense to me.

If this is true, then most of the variance in the market value reaction to the Jeffords event should depend on industry (ie whether the industry is favored by Democratic or Republican policy) and only a small amount should depend on the firm’s own political contributions. Indeed, columns 3-6 of Table 5 (in the published version) include industry fixed effects and find no effect of firm giving on market value. She interprets this as saying that “between-industry variation in donations and returns” accounts for much of the power of the main results. This is statistically accurate, but in this setting the donations are really standing in for something else: the correspondence between an industry’s interests and the ideologies of the parties. The complete story, it seems to me, is that

  • firms make soft-money donations almost entirely in order to curry favor with already-sympathetic legislators (or get them into office — an issue not addressed here), not to affect which party is in power;
  • industry-level donations therefore mirror the underlying correspondence between industry interests and party platforms; and
  • the Jeffords effect mostly reflects the fact that industries were differentially affected by the shift in political control.

In that sense the paper’s main result — the firm-level relationship between soft-money donations and stock price movements during the week of Jeffords’ defection — is almost completely an artifact of industry-level interests. I don’t mean to sound dismissive of the paper, because I think it’s outstanding, careful work, but I do come away thinking that Jayachandran didn’t quite follow her own evidence through to its logical conclusion.

What do lobbyists do? (part 1)

This question came up in a discussion I was having with a British friend about money in politics. I came up with a pretty vague answer (something about how they provided legislators with information and this wasn’t really my area), but I realized if I’m going to call myself a political scientist I need to have a better answer. So I dug a little deeper and here’s an initial report.

The clean, uncontroversial work that lobbyists do is to provide information to politicians. A lobbyist might provide a research report about the facts of an issue before the legislature, or perhaps some analysis of how voters and interest groups would react to different legislative outcomes. Of course this analysis will serve the interests of the lobbyist’s client, but I can’t really see an ethical problem with advocacy, and at any rate there is no practical way to limit this kind of advocacy while respecting principles of free speech.

Lobbyists also of course use material means to influence legislators. Among the legal things that Jack Abramoff did for his clients was to encourage them to provide campaign contributions to members of Congress (for example, Bob Ney) and/or their PACs. (The best source I found on Abramoff was this article in the Washington Post.) Abramoff also helped his clients give money to Christian activist Ralph Reed’s company, Century Strategies. I have no reason to think that this was itself illegal. (My sense is that Abramoff got in trouble mostly for stealing from his clients, not for the political influence he actually brought to bear.) From anecdotes like this, it is clear that the lobbyist’s role was to act as an intermediary between the interest group and political actors who have the power to get things done. (In this case, both Ney and Reed were able to support legislation that would help Abramoff’s clients, Indian tribes running lucrative casinos.) But this kind of behavior really can’t, and in fact shouldn’t, be stopped. One could argue that the rules about political contributions should be modified to make it harder for groups like the Indian casinos to give to politicians and activists, but it is hard to argue that intermediaries should (or could) be prevented from acting as go-betweens for legal exchanges between donors and recipients.

Regulation in the wake of the Abramoff scandal has focused not on this matchmaker role but rather on the direct exchanges he and others made: the trips, meals, and other gifts that Abramoff used to cement the loyalty of DeLay, Ney, and others. According to a guide to the House’s internal ethics rules, members of Congress previously faced limits on the value of gifts they could receive from any individual in the course of a year. Now, members of Congress are not allowed to receive anything at all from a “registered lobbyist, agent, or a foreign principal, or private entity that retains or employs such principals.” Gifts from other sources are subject to tighter value limits than those that applied before.

The constraint on gifts that can be given to members of Congress seems like a good idea because direct exchanges between lobbyists and legislators are unseemly. (The photo of Abramoff with Ney, Reed, and a few other staffers on a golf course in Scotland looks bad for everyone.) But it’s hard to believe that curtailing these kinds of gifts would diminish the impact of lobbyists. Interest groups want to influence policy, and as long as they have the right to contribute money to political causes, and as long as policymakers care about whether those contributions get made, there will be mutual gains from the kinds of exchanges that lobbyists can engineer. The gifts that have been banned by ethics rules represent one naive way of using interest group money to appeal to politicians. I don’t see how you can shut down the other, ultimately more powerful ways.

I need to look into what some of the current proposals are (floated by Obama and others) to cut down the influence of lobbyists. I suspect it’s just window dressing. For example, I believe that Obama’s campaign is not accepting contributions from lobbyists, and has criticized Hillary’s campaign for not doing the same. As with the gifts, a direct contribution from a lobbyist to a politician is merely the least imaginative way to curry favor, so eliminating these contributions can hardly be expected to have any effect on how policy gets made.

Oh right, this blog

For the past couple of months I’ve been wanting to start a blog in which I could record some thoughts that come out of my academic work, my programming, and other aspects of my life. I have outlets for statistics stuff (the Social Science Statistics blog at Harvard) and my work on ProxyDemocracy (at the ProxyDemocracy blog) but I found myself wishing I had a place to collect my thoughts and share ideas when those venues weren’t really appropriate.

I had literally forgotten about this blog, but I’m going to try to reinvigorate it. I might change the name and the url. Anyway, I really want to make this happen. I find blogging is such a good way to impose discipline on my thinking — not so much that I am paralyzed and unable to get anything done, but enough that I pursue ideas beyond the point of “hmm, that’s interesting” and to a more rewarding place where I start to have some useful and promising insights. So. More soon. 

Snyder et al on Malapportionment

“Left Shift,” chapter from an upcoming book with Ansolabehere
“Equal Votes, Equal Money: Court-Ordered Redistricting and Public Expenditures in the American States,” Ansolabehere, Gerber, and Snyder, APSR, Dec 2002

These two papers assess the impact of court-ordered redistricting on state politics in the US. In the ten years following the landmark Baker v. Carr decision of 1962, all US states redrew their state legislative district boundaries to more closely approximate a “one-man, one-vote” rule. Malapportionment had become rampant in many states, with (at the extreme) a voter in a rural county in California having 400 times the voting power of a resident of Los Angeles. This had come about because levels of representation remained fixed while county populations changed, often drastically. It is easy to imagine that this would happen through neglect and inertia, but of course any system of representation creates a constituency that opposes change.

Both papers assess the impact of the Warren Court’s judicial intervention, but they differ in the effect they are looking for. The 2002 APSR paper examines the change in intra-state transfers (primarily state education distributions to the counties), which would be expected to change because more even representation across counties should lead to more even spending. “Left Shift” looks at changes in the level of state spending (such as the total expenditures on welfare and education), which would be expected to change if the more strongly represented counties preferred a different level of spending. Both papers manage to recover the effect theory would lead us to expect. (In both cases the authors state that previous work had failed to find these effects.)

The APSR paper appears to be mainly a product of data collection: the assembly of county-level data on population, representation, spending, and demographics from both before and after the redistricting. The demographic data is of particular interest because it seems likely that other factors at least partly explain the changes in transfers they document. For example, perhaps the counties that gained representation also received more transfers because predominantly black counties had been underrepresented and blacks were also granted expanded social services in the wake of desegregation. Perhaps it was these other reforms directly benefiting blacks, rather than electoral reforms giving them more representation, that explain the increased spending on these counties. The authors address this by including “percent black” among their demographic variables, and report that it does not affect their results: there is no statistically significant interaction between changes in malapportionment and percent black (presumably as measured in the 1960s) in a regression predicting county-level changes in spending. For me, this was not entirely reassuring (this is, after all, the leading counterexplanation for their findings), but it is fairly convincing and the result is surprising.
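
Concretely, I take the check to be a regression of roughly this form (my notation, not necessarily theirs):

Δspending_c = β_0 + β_1 Δrepresentation_c + β_2 pctblack_c + β_3 (Δrepresentation_c × pctblack_c) + x_c'γ + ε_c,

where c indexes counties and x_c collects the other demographic controls. The claim is that the estimate of β_3 is indistinguishable from zero: spending gains did not track representation gains any differently in heavily black counties.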

“Left Shift” takes on a complementary problem: did changes in representation lead to changes in policy? The prediction at the time of these reforms was apparently that state governments would adopt more liberal policies once the power of rural districts was curtailed, and Snyder and Ansolabehere report that previous work had not found this correlation. Their approach is to dig into survey data to look at the changes in public opinion (as represented in the statehouse) that would result from reapportionment. What they find (using a variety of surveys) is that only in a subset of states did opinion vary geographically in the way the liberal advocates of reapportionment expected. In the Northeast, the Great Lakes states, and the coastal West (essentially, the future blue states), the overrepresented rural areas were more conservative and more likely to favor small government than the underrepresented cities and suburbs. In the rest of the country, the differences were smaller; in the South, suburban voters were among the most conservative. The effect of redressing urban/rural electoral imbalances on social spending is thus expected to vary from region to region. Indeed, Ansolabehere and Snyder report, policy did get more liberal in the “Left Shift” states and not in the remainder.

Here, the potential for confounding is large and not really addressed. The Left Shift states are essentially the blue states, as I mentioned above, and my impression is that there has been a divergence in political preferences on national politics between blue and red states. Blue state/red state polarization in national politics clearly cannot be attributed to changes in the structure of representation in state legislatures. If political preferences in the blue states are diverging from those in the red states for other reasons (economic conditions, geographic sorting, changes in the party platforms, or the political rise of the Christian right), then “Left Shift” cannot get a good estimate of the effect of redistricting on state spending: the treatment is perfectly confounded, as they say in the causal effects literature.

Overall, I find these papers to be admirable in their marshaling of lots of interesting data that is appropriate to the hypotheses they want to assess. The strengths and weaknesses of the papers are an outgrowth of the policy intervention they study. The strength is that the units (states) did not choose to implement this reform, which alleviates a host of confounding problems. On the other hand, redistricting had no control group. (In “Left Shift” the authors try to argue that the non-Left Shift states are a control group in the sense that the policy should not affect them, but it is clear that there may be other confounding factors that applied differently to the treatment and control groups, rendering the estimation muddled.) What is admirable is the relative cleanness of the estimation, considering the substantive importance of the questions they examine.

Bergstrom, Rubinfeld and Shapiro, Econometrica 1982

“Micro-Based Estimates of Demand Functions for Local School Expenditures”

This paper uses the same Michigan survey data we saw in the Gramlich and Rubinfeld paper, but this one focuses on school expenditures and features an econometric technique to estimate a continuous demand function from a three-category survey response.

The dependent variable in most of the Gramlich and Rubinfeld paper was per capita government spending, which had been backed out of survey responses where people were presented with the amount their municipality spent and asked to say how much more or less they would like the municipality to spend. Here, respondents are asked what they think education spending should be: “more,” “less,” or “the same.” The econometric move the authors are hawking here appears to be an MLE approach to converting this categorical answer into something continuous. It looks to me like they assume that individuals in different municipalities have the same tastes (conditional on their individual characteristics), and they use the variation in actual spending across municipalities to identify the width of the indifference band. In other words, the width of the indifference band is another parameter in the likelihood model (along with the coefficients on individual characteristics, which determine the expected value of the underlying continuous demand for public goods).
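
In notation of my own (the paper’s actual parameterization may differ), the model I take away is something like this. Respondent i in municipality m has latent demand

D_{im} = x_{im}'β + ε_{im},   ε_{im} ~ N(0, σ²),

and reports “less” if D_{im} < S_m − δ, “same” if S_m − δ ≤ D_{im} ≤ S_m + δ, and “more” if D_{im} > S_m + δ, where S_m is actual spending in the municipality. So, for example,

Pr(“more”) = 1 − Φ((S_m + δ − x_{im}'β)/σ),

and (β, δ, σ) are estimated jointly by maximum likelihood. It is the cross-municipality variation in S_m that pins down the half-width δ of the indifference band.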

The alternative would have been to estimate ordered probit or something like that, but this would not have made full use of the fact that actual expenditures vary across municipalities. The authors want to get a demand function out of this data, and this is how they’ll get it.

Gramlich and Rubinfeld, JPE 1982

“Micro Estimates of Public Spending Demand Functions and Tests of the Tiebout and Median-Voter Hypotheses”

In contrast to earlier studies (like Bergstrom and Goodman 1973), Gramlich and Rubinfeld attempt to estimate demand for public spending with a survey. They asked 2,001 Michigan households to look at the current profile of spending in their county and assess whether they were happy with this level or would like more or less (in percentage points). What they found is that people were pretty happy with what they had: 2/3 of respondents in urban areas and 3/5 in rural areas asked for no change in spending. These are reasonable and fairly interesting results. As the authors point out, the fairly uniform apparent demand for public goods among citizens within communities stands in contrast to observed differences in the provision of public goods across communities.

The obvious interpretation, in my view, is that people have a cognitive bias in favor of the status quo. Here’s the experiment they should have run alongside this survey: give some people inaccurate expenditure data and ask them to say what changes they would like to see. I expect that about the same proportion would say this level of spending was right, and about the same proportions would say it was too low or too high. Of course, this does not get at the whole issue: what we’d really like to do is change the actual level of services people get and see what they think of those altered services, but this is of course not possible. At any rate, lab results in psychology and behavioral economics have confirmed that this sort of status quo bias is rampant [citation needed]. I think any economics paper written today would have to at least mention this possibility.

Gramlich and Rubinfeld do not mention this possibility, and instead spend the paper considering three alternative explanations, one of which I find pretty ridiculous and the other two plausible and probably as important as cognitive biases. The first explanation they consider is that the rich actually do want a lot more public goods than the poor do, but they appear just as satisfied with the status quo as the poor because they actually consume a larger share of public goods. Public spending, in other words, is “pro-rich.” The authors trot out some evidence from other studies (none of which I’ve seen) arguing that spending on schools and other public services within cities is skewed toward the rich. I can’t evaluate these studies, but I can say that this did not match my priors: I thought spending across districts was actually skewed toward poor schools in many cases (although in most states rich districts are able to spend more than poor districts), and that lawsuits would prevent much of a distinction in spending within districts. Certainly quality varies widely (that was my prior) but not spending. Anyway, I find this argument to be the kind of thing that doesn’t make sense unless you’re thinking in terms of marginal rates of substitution between public and private goods, and something so basic that doesn’t make sense unless you have a particular stock economic model in mind is probably not right.

The second and more plausible explanation is that people have already done a lot of sorting, and they like what they get because they chose to live there. I really can’t argue with this explanation, and I can’t think of any particularly good way to test it. The experiment of giving people inaccurate spending profiles and asking how they would change it would only address part of the issue. The third explanation, also plausible, is that we are in or near political equilibrium, and that people are satisfied because their political system has provided the median preferred level of public goods.

My sense is that the paper establishes that some combination of cognitive bias, sorting, and democracy has left people pretty happy with their public goods expenditures, but that we don’t really know much beyond that.

Romer and Rosenthal, JPE 1979

“The Elusive Median Voter”

Romer and Rosenthal’s main point is that the empirical evidence for the median voter theorem is not as solid as people think (or thought, in 1979). Existing studies (like that of Bergstrom and Goodman 1973) had demonstrated that expenditures across political units were correlated with characteristics of the median voter, particularly his income and tax price. But the same finding would emerge under plausible alternatives to the median voter theorem.

Their critique centers on two problems they see in existing literature on the median voter theorem:

  • the multiple fallacy: It is not clear from existing studies whether the median voter gets what he wants or some multiple of what he wants.
  • the fractile fallacy: It is not clear whether the median voter is the one who is pivotal or whether a voter at some other point in the distribution is decisive.

Romer and Rosenthal present one alternative to the median voter model that demonstrates how it could be the case that the median is decisive but gets a multiple of what he wants. In a bureaucratic threat model, the bureaucracy is able to force the electorate to choose between a status quo and an alternative of the bureaucracy’s choosing, and it may be able to preserve a status quo it favors by presenting an alternative that is sufficiently unsatisfying to the median voter. A more straightforward example could be that competing parties fail to converge on the median voter for some reason (the literature has provided a number of them, such as directional voting or the need to generate turnout by providing distinct alternatives), such that the median is pivotal but again effectively chooses between two somewhat dissatisfying alternatives.
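
The agenda-setter logic is easy to see in a stripped-down version (mine, not necessarily the paper’s exact setup). Suppose voters have single-peaked, symmetric preferences over spending, the median voter’s ideal point is m, and the reversion level is q < m. The median prefers any proposal p to the reversion whenever |p − m| < |q − m|, so a budget-maximizing setter can pass anything up to

p = 2m − q,

which for an unattractive enough reversion approaches double what the median voter actually wants. The median is decisive throughout, yet observed spending is a multiple of his ideal.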

These are useful criticisms. Also of note are the references in the paper to survey-based research that does a better job than Bergstrom and Goodman of estimating individual demand functions for public goods. Still missing, it seems, are natural-experiment approaches to estimating the responsiveness of policy to the median voter, such as Lott and Kenny’s work on the effect of female suffrage on social spending.

Bergstrom and Goodman, AER 1973

“Private Demands for Public Goods”

This paper tries to use the median voter theorem to estimate the parameters of an individual demand function for public goods. The authors rely heavily on the median voter theorem as an assumption, and in fact spend very little time justifying its use or using their own results to assess its plausibility. They specify a functional form for individual demand for public goods, they postulate that observed levels of municipal spending reflect the preferences of the median voter, and then they use the parameters of their fitted model to draw inferences about individual demand for public goods, most notably that consumers seem to view these goods as essentially private. In general my impression is that they are asking too much of this data. The functional form and choice of control variables both seem likely to have a large impact on their estimated parameters, and there is a lot of leeway in choosing both (and no sensitivity tests demonstrating how their estimates change with reasonable modifications to their model). Their structural model approach is audacious but ultimately fails to convince me that I should pay a lot of attention to their estimates, particularly their estimate of the publicness of public goods. (They combine two imprecisely estimated parameters into a single crowding parameter to find that most public goods are in fact private, in the sense that there are no advantages to sharing them on a larger scale in the range of municipalities they consider.)
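
As I reconstruct the setup (from memory, so the notation and details are mine rather than theirs), the demand function is of the constant-elasticity form

q_i = k y_i^δ (τ_i c n^γ)^ε,

where q_i is the individual’s effective consumption of the public good, y_i his income, τ_i his tax share, c the unit cost, n the municipality’s population, and γ the crowding parameter (γ = 0 for a pure public good, γ = 1 for an effectively private one). If municipal provision X translates into individual consumption q_i = X n^{−γ}, observed expenditures satisfy, in logs,

log X = log k + δ log y + ε log(τc) + γ(1 + ε) log n,

so γ has to be backed out from the coefficient on log n together with the estimated price elasticity: one imprecise estimate combined with another, which I suspect is part of why the headline “publicness” result seems fragile.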

If we step back from their more rococo modeling endeavors, there are interesting data and correlations to be found here. At a minimum, the authors have provided evidence that policy responds to citizen preferences by showing that expenditures are higher where the median voter has a relatively smaller share of the bill. This is what we would expect from a representative democracy.

Still, there are certainly omitted variables that make it hard to know whether even this is true. Cities where the median voter pays a lower share of taxes differ in ways that almost certainly are not properly captured by their controls. Cities where the median voter pays a lower proportion of the property tax bill may have more inequality (think of what happens to your share of property tax revenues when Bill Gates builds a $100 million house next door), and this might lead to higher expenditures because the rich have power in the government and get what they want. (This would be a polity in which the median voter theorem does not apply.) The median homeowner’s effective tax price would also be lower in a city with a lot of commercial and industrial development, and again municipal expenditures could be higher there because such places have more crime, or because the industrial interests have captured the government and want it to provide public goods that benefit them, such as transportation infrastructure, security, or beautification.

I interpret this paper as part of an empirical project to confirm that democratic government is giving us the policies we want it to (kind of an empirical companion to Downs, but one engaging in a tradition of assumption-laden structural estimation that produces estimates that are a little hard to believe).

Word Frequencies in Ruby

I just started working with Kevin Quinn on a large project focused on applying techniques from unsupervised learning to analyze the content of political speech. Here is a link to the first article to come out of this project, which analyzes the Congressional Record to assign floor speeches to categories based on each speech’s vector of word frequencies. Charmingly, the categories that result correspond to the categories we might create ourselves if we grouped the speeches by hand (defense spending, education, etc.).

I read that paper last summer and was very impressed and inspired by it. Now I’m lucky to be working on the project myself. (I really mean lucky — I did very little to deserve this, and in fact appeared to do my best to be passed over by not contacting Kevin in the fall even after he asked me to get in touch.)

My first task was to produce a matrix of word frequencies for a set of New York Times articles Kevin provided me: rows are words (or word stems) and columns are articles, so a representative entry f_{w,a} is the number of times word w appeared in article a. I did my best to code this up in good OOP style. For me, this basically meant thinking a little about what conceptual objects I was dealing with (Articles and Corpuses was what came to mind) and then looking for ways to wrap any “top-level” code that was left into these or other classes. The core of my solution is an Article class, each instance of which has a title and text, and which knows how to produce a hash of its word frequencies, like

{ "and" => 4, "mother" => 2, ... }

I also have a Corpus class, each instance of which has an array of Article objects. A Corpus knows how to produce a matrix of word frequencies for its Articles. Finally, there is a DirectoryOfTexts class, each instance of which has a directory location containing texts, and which knows how to make an array of Article objects from those texts that can then be handed to a Corpus. So the pseudocode is basically:

d = DirectoryOfTexts.new("path to directory with new york times articles in it")
c = Corpus.new(d.make_array_of_article_objects)

And that produces the CSV of word frequencies for the articles in that directory.
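
For concreteness, here is a minimal sketch of what these classes might look like. The class names and make_array_of_article_objects are as described above; everything else (method names, the file-reading details, the tokenizing regex) is a placeholder of my own rather than the actual project code:

require "csv"

class Article
  attr_reader :title, :text

  def initialize(title, text)
    @title = title
    @text  = text
  end

  # Hash from each word to its count; absent words default to 0.
  def word_frequencies
    @word_frequencies ||= begin
      counts = Hash.new(0)
      text.downcase.scan(/[a-z']+/) { |word| counts[word] += 1 }
      counts
    end
  end
end

class Corpus
  def initialize(articles)
    @articles = articles
  end

  # Every distinct word appearing anywhere in the corpus.
  def vocabulary
    @articles.flat_map { |a| a.word_frequencies.keys }.uniq.sort
  end

  # Write the words-by-articles frequency matrix to a CSV file.
  def write_frequency_csv(path)
    CSV.open(path, "w") do |csv|
      csv << ["word"] + @articles.map(&:title)
      vocabulary.each do |word|
        # The Hash.new(0) default (the trick described below) means
        # words missing from an article come back as 0, not an error.
        csv << [word] + @articles.map { |a| a.word_frequencies[word] }
      end
    end
  end
end

class DirectoryOfTexts
  def initialize(path)
    @path = path
  end

  # One Article per text file in the directory, titled by its filename.
  def make_array_of_article_objects
    Dir.glob(File.join(@path, "*.txt")).map do |file|
      Article.new(File.basename(file, ".txt"), File.read(file))
    end
  end
end

With these definitions the two pseudocode lines above work as written, followed by a call like c.write_frequency_csv("nyt_frequencies.csv").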

One little trick that I employed was to set a default value for my word frequency hashes, such that

h = Hash.new(0)

will return a value of 0 when I give it a key that it doesn’t have. This was useful in producing the matrix of word frequencies, because I could basically produce an array of unique words from all the word frequency hashes, and then for each element of the matrix just ask for

freqs[word]

(where freqs is an article’s word frequency hash) and get a zero instead of an error in cases where the word was not in that article.

I found it very satisfying to think this through and work up the code, and I had a couple of thoughts:

1. I want to read more code that has good OOP style, and maybe read something on the topic. I feel like I’ve stumbled into some good practices but could speed things along by reading more good code and good theory. I should try to read some good code every day.

2. I wanted to explain to non-programmers what was so cool about this style of getting things done. I wondered again whether there are any good books or essays out there about the type of thinking that programming requires of you, or something that explains the zen of programming to a popular audience. It’s something I’ve thought about since taking lab electronics in college, and even more since doing and teaching programming after coming to grad school.