What do lobbyists do? (part 1)


This question came up in a discussion I was having with a British friend about money in politics. I came up with a pretty vague answer (something about how they provided legislators with information and this wasn’t really my area), but I realized if I’m going to call myself a political scientist I need to have a better answer. So I dug a little deeper and here’s an initial report.

The clean, uncontroversial work that lobbyists do is to provide information to politicians. A lobbyist might provide a research report about the facts of an issue before the legislature, or perhaps some analysis of how voters and interest groups would react to different legislative outcomes. Of course this analysis will serve the interests of the lobbyist’s client, but I can’t really see an ethical problem with advocacy, and at any rate there is no practical way to limit this kind of advocacy while respecting principles of free speech.

Lobbyists also, of course, use material means to influence legislators. Among the legal things that Jack Abramoff did for his clients was to encourage them to provide campaign contributions to members of Congress (for example, Bob Ney) and/or their PACs. (The best source I found on Abramoff was this article in the Washington Post.) Abramoff also helped his clients give money to Christian activist Ralph Reed’s company, Century Strategies. I have no reason to think that this was itself illegal. (My sense is that Abramoff got in trouble mostly for stealing from his clients, not for the political influence he actually brought to bear.) From anecdotes like this, it is clear that the lobbyist’s role is to act as an intermediary between the interest group and political actors who have the power to get things done. (In this case, both Ney and Reed were able to support legislation that would help Abramoff’s clients, Indian tribes running lucrative casinos.) But this kind of behavior really can’t, and in fact shouldn’t, be stopped. One could argue that the rules about political contributions should be modified to make it harder for groups like the Indian casinos to give to politicians and activists, but it is hard to argue that intermediaries should (or could) be prevented from acting as go-betweens for legal exchanges between donors and recipients.

Regulation in the wake of the Abramoff scandal has focused not on this matchmaker role but rather on the direct exchanges he and others made — the trips, meals, and other gifts that Abramoff used to cement the loyalty of DeLay, Ney, and others. According to a guide to the House’s internal ethics rules, members of Congress previously faced limits on the value of gifts they could receive from any individual over the course of a year. Now, members of Congress are not allowed to receive anything at all from a “registered lobbyist, agent of a foreign principal, or private entity that retains or employs such principals.” Gifts from other sources are subject to tighter value limits than those that applied before.

The constraint on gifts that can be given to members of Congress seems like a good idea because direct exchanges between lobbyists and legislators are unseemly. (The photo of Abramoff with Ney, Reed, and a few other staffers on a golf course in Scotland looks bad for everyone.) But it’s hard to believe that curtailing these kinds of gifts would diminish the impact of lobbyists. Interest groups want to influence policy, and as long as they have the right to contribute money to political causes, and as long as policymakers care about whether those contributions get made, there will be mutual gains from the kinds of exchanges that lobbyists can engineer. The gifts that have been banned by ethics rules represent one naive way of using interest group money to appeal to politicians. I don’t see how you can shut down the other, ultimately more powerful ways.

I need to look into what some of the current proposals are (floated by Obama and others) to cut down the influence of lobbyists. I suspect it’s just window dressing. For example, I believe that Obama’s campaign is not accepting campaign contributions from lobbyists and has criticized Hillary’s campaign for not doing the same. As with the gifts, a direct contribution from a lobbyist to a politician is merely the least imaginative way to curry favor, so eliminating these contributions can hardly be expected to have any effect on how policy gets made.

Oh right, this blog

For the past couple of months I’ve been wanting to start a blog in which I could record some thoughts that come out of my academic work, my programming, and other aspects of my life. I have outlets for statistics stuff (the Social Science Statistics blog at Harvard) and for my work on ProxyDemocracy (at the ProxyDemocracy blog), but I found myself wishing I had a place to collect my thoughts and share ideas when those venues weren’t really appropriate.

I had literally forgotten about this blog, but I’m going to try to reinvigorate it. I might change the name and the url. Anyway, I really want to make this happen. I find blogging is such a good way to impose discipline on my thinking — not so much that I am paralyzed and unable to get anything done, but enough that I pursue ideas beyond the point of “hmm, that’s interesting” and to a more rewarding place where I start to have some useful and promising insights. So. More soon. 

Snyder et al on Malapportionment

“Left Shift,” chapter from upcoming book with Ansolabehere
“Equal Votes, Equal Money: Court-Ordered Redistricting and Public Expenditures in the American States,” Ansolabehere, Gerber, and Snyder, APSR, Dec 2002

These two papers assess the impact of court-ordered redistricting on state politics in the US. In the ten years following the landmark Baker v. Carr decision of 1962, all US states redrew their state legislative district boundaries to more closely approximate a “one-man, one-vote” rule. Malapportionment had become rampant in many states, with (at the extreme) a voter in a rural county in California having 400 times the voting power of a resident of Los Angeles. This had come about because levels of representation remained fixed while county populations changed, often drastically. It is easy to imagine that this would happen through neglect and inertia, but of course any system of representation creates a constituency that opposes change.

Both papers assess the impact of the Warren Court’s judicial intervention, but they differ in the effect they are looking for. The 2002 APSR paper examines the change in intra-state transfers (primarily state education distributions to the counties), which would be expected to change because more even representation across counties should lead to more even spending. “Left Shift” looks at changes in the level of state spending (such as the total expenditures on welfare and education), which would be expected to change if the more strongly represented counties preferred a different level of spending. Both papers manage to recover the effect theory would lead us to expect. (In both cases the authors state that previous work had failed to find these effects.)

The APSR paper appears to be mainly a product of data collection — the production of county-level data on population, representation, spending, and demographics for both before and after the redistricting. The demographic data is of particular interest because it seems likely that other factors at least partly explain the changes in transfers they document. For example, perhaps the districts that gained representation also received more transfers because predominantly black counties had been underrepresented and blacks were also granted expanded social services in the wake of desegregation. Perhaps it was these other reforms directly benefiting blacks, rather than electoral reforms giving them more representation, that explain the increased spending on these counties. The authors address this by including “percent black” among their demographic variables and report that it does not affect their results: they say there is no statistically significant interaction between changes in malapportionment and percent black (presumably as measured in the 1960s) in a regression predicting county-level changes in spending. For me this was not quite reassuring enough, considering that this would be the leading alternative explanation for their findings, but it was fairly convincing, and the result is surprising.
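To make that check concrete for myself, here is a minimal sketch of the kind of regression I have in mind. The column names (change_rep, pct_black, change_spending) and the data are invented for illustration; this is not the authors’ actual specification or dataset.

```python
# Hypothetical county-level data: change in representation, percent black,
# and change in per-capita state transfers (all made up for illustration).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical number of counties

df = pd.DataFrame({
    "change_rep": rng.normal(size=n),          # change in relative representation
    "pct_black": rng.uniform(0, 0.5, size=n),  # percent black, circa 1960
})
df["change_spending"] = 2.0 * df["change_rep"] + rng.normal(size=n)

# Regress county-level changes in spending on the change in representation,
# percent black, and their interaction; the authors' claim is that the
# interaction term is not statistically significant.
fit = smf.ols("change_spending ~ change_rep * pct_black", data=df).fit()
print(fit.summary().tables[1])
```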

“Left Shift” takes on a complementary problem: did changes in representation lead to changes in policy? The prediction at the time of these reforms was apparently that state governments would adopt more liberal policies once the power of rural districts was curtailed, and Snyder and Ansolabehere report that previous work had not found this correlation. Their approach is to dig into the survey data to look at the changes in public opinion (as represented in the statehouse) that would result from reapportionment. What they find (using a variety of surveys) is that only in a subset of states did opinion vary geographically in the way the liberal advocates of reapportionment thought. In the Northeast, the Great Lakes states, and the coastal West (essentially, the future blue states), the overrepresented rural areas were more conservative and more likely to favor small government than the underrepresented cities and suburbs. In the rest of the country, the differences were smaller; in the South, suburban voters were among the most conservative. The effect of redressing urban/rural electoral imbalances on social spending is thus expected to vary from region to region. Indeed, Ansolabehere and Snyder report, policy did get more liberal in the “Left Shift” states and not in the remainder.

Here, the potential for confounding is large and not really addressed. The Left Shift states are essentially the blue states, as I mentioned above, and my impression is that political preferences in national politics have diverged between blue and red states, a polarization that clearly cannot be attributed to changes in the structure of representation in state legislatures. If preferences in the blue states were diverging from those in the red states anyway, perhaps because of economic conditions, geographic sorting, changes in the party platforms, or the political rise of the Christian right, then “Left Shift” cannot get a good estimate of the effect of redistricting on state spending: the treatment is perfectly confounded, as they say in the causal inference literature.

Overall, I find these papers to be admirable in their marshaling of lots of interesting data that is appropriate to the hypotheses they want to assess. The strengths and weaknesses of the papers are an outgrowth of the policy intervention they study. The strength is that the units (states) did not choose to implement this reform, which alleviates a host of confounding problems. On the other hand, redistricting had no control group. (In “Left Shift” the authors try to argue that the non-Left Shift states are a control group in the sense that the policy should not affect them, but it is clear that there may be other confounding factors that applied differently to the treatment and control groups, rendering the estimation muddled.) What is admirable is the relative cleanness of the estimation, considering the substantive importance of the questions they examine.

Bergstrom, Rubinfeld and Shapiro, Econometrica 1982

“Micro-Based Estimates of Demand Functions for Local School Expenditures”

This paper uses the same Michigan survey data we saw in the Gramlich and Rubinfeld paper, but this one focuses on school expenditures and features an econometric technique for estimating a continuous demand function from a three-category survey response.

The dependent variable in most of the Gramlich and Rubinfeld paper was per capita government spending, which had been backed out of survey responses in which people were shown the amount their municipality spent and asked how much more or less they would like it to spend. Here, respondents are asked what they think education spending should be: “more,” “less,” or “the same.” The econometric move the authors are hawking here appears to be an MLE approach to converting this categorical answer into something continuous. It looks to me like they assume that individuals in different municipalities have the same tastes (conditional on their individual characteristics), and they use the variation in actual spending across municipalities to identify the width of their indifference band. In other words, the width of the indifference band is another parameter in the likelihood model (along with coefficients on individual characteristics, which determine the expected value of the underlying continuous demand for public goods).

The alternative would have been to estimate an ordered probit or something like that, but this would not have made full use of the fact that actual expenditures vary across municipalities. The authors want to get a demand function out of this data, and this is how they’ll get it.
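To fix ideas for myself, here is a toy version of how I understand that likelihood. The variable names and data are invented, and this is my reconstruction of the idea rather than the authors’ actual specification: latent demand is x'beta plus a normal error, and a respondent answers “less,” “same,” or “more” depending on where actual municipal spending falls relative to an indifference band of half-width delta around that latent demand.

```python
# Toy maximum-likelihood estimation of a demand function from a
# three-category response with an indifference band (my reconstruction,
# using synthetic data and invented variable names).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, X, spend, resp):
    """resp: 0 = wants less, 1 = same, 2 = wants more."""
    k = X.shape[1]
    beta = params[:k]
    sigma = np.exp(params[k])      # keep the error s.d. positive
    delta = np.exp(params[k + 1])  # keep the band half-width positive
    mu = X @ beta                  # expected latent demand
    # "more" if latent demand > spend + delta, "less" if < spend - delta
    p_more = 1 - norm.cdf((spend + delta - mu) / sigma)
    p_less = norm.cdf((spend - delta - mu) / sigma)
    p_same = np.clip(1 - p_more - p_less, 1e-12, None)
    probs = np.choose(resp, [np.clip(p_less, 1e-12, None),
                             p_same,
                             np.clip(p_more, 1e-12, None)])
    return -np.sum(np.log(probs))

# Synthetic example with a constant and "income" as the lone covariate.
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(10, 2, n)])
spend = rng.normal(10, 3, n)                  # actual municipal spending
latent = X @ np.array([1.0, 1.0]) + rng.normal(0, 2, n)
resp = np.where(latent > spend + 1.5, 2, np.where(latent < spend - 1.5, 0, 1))

start = np.concatenate([[spend.mean()], np.zeros(X.shape[1] - 1),
                        [np.log(spend.std()), 0.0]])
fit = minimize(neg_log_lik, start, args=(X, spend, resp), method="BFGS")
print(fit.x)  # beta estimates, then log(sigma) and log(delta)
```

Because actual spending varies across municipalities, the cutpoints of this “ordered” model vary too, which (as described above) is what identifies the band width and hence a continuous demand function.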

Gramlich and Rubinfeld, JPE 1982

“Micro Estimates of Public Spending Demand Functions and Tests of the Tiebout and Median-Voter Hypotheses”

In contrast to earlier studies (like Bergstrom and Goodman 1973), Gramlich and Rubinfeld attempt to estimate demand for public spending with a survey. They asked 2,001 Michigan households to look at the current profile of spending in their county and say whether they were happy with this level or would like more or less (in percentage points). What they found is that people were pretty happy with what they had: 2/3 of respondents in urban areas and 3/5 in rural areas asked for no change in spending. These are reasonable and fairly interesting results. As the authors point out, the fairly uniform apparent demand for public goods from citizens within communities stands in contrast to observed differences in the provision of public goods across communities.

The obvious interpretation, in my view, is that people have a cognitive bias in favor of the status quo. Here’s the experiment they should have run alongside this survey: give some people the wrong data (expenditure levels that actually are not accurate) and ask them to say what changes they would like to see. I expect that about the same proportion would say this level of spending was right, and about the same would say it was too low or too high. Of course, this does not get at the whole issue — what we’d really like to do is change the actual level of services people get and see what they think of those altered services, but this is of course not possible. At any rate, lab results in psychology and behavioral economics have confirmed that this sort of bias is rampant. [citation needed] I think any economics paper written today would have to at least mention this possibility.

Gramlich and Rubinfeld do not mention this possibility, and instead spend the paper considering three alternative explanations, one of which I find to be pretty ridiculous and the other two plausible and probably as important as cognitive biases. The first explanation they consider is that the rich actually do want a lot more public goods than the poor do, but they appear to be as satisfied with the status quo as the poor because they actually consume a larger proportion of public goods than the poor do. Public spending, in other words, is “pro-rich.” The authors trot out some evidence from other studies (none of which I’ve seen) arguing that spending on schools and other public services within cities is skewed toward the rich. I can’t evaluate these studies, but I can say that this did not match what I thought: my impression was that spending across districts was actually skewed toward poorer districts in many cases (although in most states rich districts are able to spend more than poor districts), and that lawsuits would prevent much of a distinction in spending within districts. My prior was that quality varies widely, but not spending. Anyway, I find this argument to be the kind of thing that doesn’t make sense unless you’re thinking about marginal rates of substitution between public and private goods, and something so basic that doesn’t make sense unless you have the same stock economic model in mind is probably not right.

The second and more plausible explanation is that people have already done a lot of sorting, and they like what they get because they chose to live there. I really can’t argue with this explanation, and I can’t think of any particularly good way to test it. The experiment of giving people inaccurate spending profiles and asking how they would change it would only address part of the issue. The third explanation, also plausible, is that we are in or near political equilibrium, and that people are satisfied because their political system has provided the median preferred level of public goods.

My sense was that the paper establishes that some combination of cognitive bias, sorting, and democracy has left people pretty happy with their public goods expenditures, but we don’t really know much beyond that.

Bergstrom and Goodman, AER 1973

“Private Demands for Public Goods”

This paper tries to use the median voter theorem to estimate the parameters of an individual demand function for public goods. The authors rely heavily on the median voter theorem as an assumption, and in fact spend very little time justifying its use or using their own results to assess its plausibility. They specify a functional form for individual demand for public goods, they postulate that observed levels of municipal spending reflect the preferences of the median voter, and then they use the parameters of their fitted model to draw inferences about individual demand for public goods, most notably that consumers seem to view these goods as essentially private. In general my impression is that they are asking too much of this data. The functional form and choice of control variables both seem likely to have a large impact on their estimated parameters, and there is a lot of leeway in choosing both (and no sensitivity tests demonstrating how their estimates change with reasonable modifications to their model). Their structural model approach is audacious but ultimately fails to convince me that I should pay a lot of attention to their estimates, particularly their estimate of the publicness of public goods. (They combine two imprecisely estimated parameters into a single crowding parameter to find that most public goods are in fact private, in the sense that there are no advantages to sharing them on a larger scale in the range of municipalities they consider.)
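To see what such a structural exercise looks like, here is a stylized sketch under my own assumptions: a log-linear individual demand function with price elasticity alpha and income elasticity beta, and a congestion technology in which the good actually consumed is total expenditure deflated by population raised to a crowding parameter gamma (gamma = 0 would be a pure public good, gamma = 1 essentially a private one). Under those assumptions the coefficient on log population in the expenditure regression is gamma times (1 + alpha), so the crowding parameter gets backed out from two estimated coefficients, which is roughly the move described above. The variable names and data are hypothetical, and this is not necessarily the paper’s exact specification.

```python
# Stylized median-voter expenditure regression with an implied crowding
# parameter (my own assumed functional form, synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800  # hypothetical number of municipalities

df = pd.DataFrame({
    "tax_share_med": rng.uniform(0.0005, 0.01, n),  # median voter's tax share
    "income_med": rng.lognormal(10, 0.3, n),        # median income
    "pop": rng.lognormal(9, 1.0, n),                # population
})
# Generate fake expenditures from the assumed structure
alpha_true, beta_true, gamma_true = -0.3, 0.7, 0.9
df["exp_total"] = np.exp(
    1.0
    + alpha_true * np.log(df["tax_share_med"])
    + beta_true * np.log(df["income_med"])
    + gamma_true * (1 + alpha_true) * np.log(df["pop"])
    + rng.normal(0, 0.2, n)
)

# Log expenditure on log tax share, log income, and log population
fit = smf.ols(
    "np.log(exp_total) ~ np.log(tax_share_med) + np.log(income_med) + np.log(pop)",
    data=df,
).fit()

alpha_hat = fit.params["np.log(tax_share_med)"]   # price elasticity
delta_hat = fit.params["np.log(pop)"]             # population coefficient
gamma_hat = delta_hat / (1 + alpha_hat)           # implied crowding parameter
print(f"alpha = {alpha_hat:.2f}, gamma = {gamma_hat:.2f} (near 1 => essentially private)")
```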

If we step back from their more rococo modeling endeavors, there are interesting data and correlations to be found here. At a minimum, the authors have provided evidence that policy responds to citizen preferences by showing that expenditures are higher where the median voter has a relatively smaller share of the bill. This is what we would expect from a representative democracy.

Admittedly, there are certainly omitted variables involved that make it hard to know whether even this is true. Cities where the median voter pays a lower amount of taxes are different in ways that almost certainly are not properly modeled by their controls. Cities where the median voter pays a lower proportion of the property tax bill may have more inequality (think of what happens to your share of property tax revenues when Bill Gates builds a 100 million dollar house next door), and this might lead to more expenditures because the rich have power in the government and get what they want. (This would be a polity in which the median voter theorem does not apply.) The median homeowner’s effective tax price would also be lower in a city with a lot of commercial and industrial development, and again municipal expenditures could be higher there because such places have more crime, or because the industrial interests have captured the government and want it to provide public goods that benefit them, such as transportation infrastructure, security, or beautification.

I interpret this paper as part of an empirical project to confirm that democratic government is giving us the policies we want it to (kind of the empirical companion to Downs), though it engages in the tradition of assumption-laden structural estimation that produces estimates that are a little hard to believe.

Hard-earned 1890 tax dollars went to this...

I am doing a project involving data from the 1890 agricultural census. One of the variables I won’t be using:

LAMBSKIL NUMBER OF LAMBS KILLED BY DOGS, 1889
Type: numeric Number of lambs killed by dogs, 1889.
U.S. Bureau of the Census [1895b], Table 8.

This data is provided by county. In my home county of Monroe, New York State, there were apparently 402 lambs killed by dogs in 1889.

In Humboldt County, CA, where Eureka is, there were over 8,000 sheep killed by dogs that year.

I can’t believe they were collecting this kind of thing.

Is there some badass research question hiding in this data about sheep and dogs? Any aspiring Steven Levitts are invited to chime in.

Browsing without a mouse

My roommate John had mentioned some Linux feature he had found that allowed for mouseless browsing: when you pressed a key, each of the links on a webpage would appear with a letter next to it, and you could follow a link by pressing that letter. No using the mouse. This morning I found a few Firefox extensions that allow this kind of browsing, and although I’ve only been using Hit-a-Hint for about five minutes I’m already hooked. The other one, the aptly named Mouseless Browsing, looks good too — it looks to me like it has more settings you can tweak. But by the time I had found MB I had already installed HaH, and I like the default behavior of HaH so I’ll stick with it for now.

As part of my delayed but accelerating descent into geekdom, I’ve come to understand this aversion to the mouse. I think that people really get nuts about keyboard shortcuts through some combination of a) using the computer enough for it to be important, and b) getting comfortable enough with their software to want to understand non-necessary but useful things like keyboard shortcuts. I’m increasingly there on both counts. Plus once you start trying to not use the mouse, it gets to be kind of an obsession. The less frequently you reach for the mouse the more you wonder whether there is a keyboard shortcut for that too.

In reading today’s Lifehacker, where Wendy had asked people to mention their favorite shortcuts, I was reminded of that Onion opinion piece where the guy was going on and on about the usefulness of keyboard shortcuts. I would never do that, would I?

Deep into Python

The past week or so I’ve been diving deep into Python. I’m trying to learn text processing techniques so that I can assemble data through screen scraping, both for research and for my voting recommendation project. For example, the first thing I’d really like to be able to do is parse SEC documents on the web in order to assemble a database of mutual fund proxy voting. There is a lot of data out there, and I’m tired of giving up if I can’t find it in a nice table somewhere (or assembling it in annoying ways like manual data entry). Python is going to help me fix this.

After some deliberation about whether to learn Ruby or Python, I settled on Python, largely because I heard the libraries were somewhat better developed and, more importantly, my computer scientist roommate appears to be a Python wizard and seems to like helping me along. So I started with the O’Reilly book Learning Python, changed over to the Magnus Lie Hetland book Beginning Python, and am now starting in on the David Mertz book Text Processing in Python. I can report that the Hetland book is the best at getting off the ground — much more engaging than the O’Reilly book — but I found Hetland’s examples and problems a little too involved, in ways that seemed a little too obscure, for my speed. The “Regular Expressions HOWTO” was a great way to solidify my understanding of regular expressions, and the Mertz book looks like a good way to extend things a little further. I’ll try to report back on my progress.

Overall, I can say that I am really enjoying this. I am getting that euphoric feeling that comes with rapid progress at the beginning of pretty much anything (for me, particularly languages and musical instruments). But also it’s clear that this stuff is really useful for what I want to do, and opens up a lot of possibilities. I love it that you can write a few lines of code and extract email addresses from some webpage somewhere. Not that I’m ready to become a spammer or anything, but I can see how this is bringing me closer to being able to assemble information for people in a useful way, which is pretty much the goal.
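For what it’s worth, here is roughly the kind of snippet I mean. The URL is just a placeholder and the regex is deliberately crude:

```python
# Fetch a page and print anything that looks like an email address
# (placeholder URL, crude regex -- just a sketch).
import re
import urllib.request

URL = "http://www.example.com/some-page.html"  # placeholder
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

html = urllib.request.urlopen(URL).read().decode("utf-8", errors="replace")
for address in sorted(set(EMAIL_RE.findall(html))):
    print(address)
```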