To sum up, we've had a lot of conflicting calculations of how often Justice Butler has and has not ruled for defendants in criminal cases. Contrary to the assertions of Illusory Tenant and other members of the defense bar, academics calculate numbers like these all the time for various purposes.
This type of analysis is not counting cases as "right" or "wrong." It's not, despite loose use of language, even counting them "for" or "against" Justice Butler. It isn't, although some slip into the language, even counting cases as "pro" or "anti" "criminal."
Empirical studies like this are not going to capture everything that more traditional doctrinal study (i.e., analyzing the arguments for or against a position) does, and they can't replace it. But if you look at enough cases in an evenhanded way, this kind of analysis can reveal a pattern that points to doctrinal differences among justices.
"Among" is an important word. The key is not whether Louis Butler voted to uphold the claim of a defendant 25%, 48% or 58% of the time. We can think that any one of these numbers sounds high or low, but how are we really to know? Maybe we have a state full of blundering constables and the Supreme Court has needed to clean house.
This is where (and now I am going to cause my lefty readers to spasm again) Jessica McBride advanced the conversation. She did a comparative analysis. That analysis showed that Justice Butler voted to uphold at least one claim of a criminal defendant in a case two to three times as often as Justices Prosser, Roggensack and Wilcox. Even though I adjusted her number for Justice Butler downward a bit, from 58% to 49%, those stark differences remain.
That tells us something, and neither IT nor any of the hot and bothered anonymous commenters here has said one thing to contradict it. We can all differ on what it tells us. Maybe Justice Butler has a more enlightened view of the civil liberties of the accused. Maybe he reads those civil liberties too expansively at the expense of law enforcement. But one thing is sure. He reads them differently than Justices Prosser and Roggensack.
They have criticized the methodology. IT says that the cases have "a number of subtleties that Prof. Esenberg's suggested methodology simply can't take into consideration." True, and he didn't even have to say so, because I did in my first post on all of this. But, for these purposes, it doesn't matter. Sometimes we want to home in on details. Sometimes we want to step back and see the big picture. Here, we see Louis Butler is three times as likely to uphold a claim of a criminal defendant as Patience Roggensack. Obviously, Justices Butler and Roggensack see things differently. Yes, there were certainly subtleties in all of these cases, but we treated both Justices Roggensack and Butler the same and we had enough cases that our big picture is still valid.
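To make the comparison concrete, here is a minimal sketch in Python of the kind of tally involved. The per-case codings below are invented placeholders, not the actual Wisconsin data set; the point is only the rule (code a justice 1 in a case if he or she joined a result upholding at least one claim of the defendant, 0 otherwise) and the comparison of rates across justices.

```python
# Minimal sketch of the comparative tally. The codings are hypothetical
# placeholders, not the actual Wisconsin Supreme Court data.

votes = {
    # justice: per-case codings (1 = joined a result upholding at least
    # one claim of the defendant, 0 = did not)
    "Butler":     [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "Roggensack": [0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
}

def pro_defendant_rate(codings):
    """Share of cases in which the justice joined a pro-defendant result."""
    return sum(codings) / len(codings)

for justice, codings in votes.items():
    print(f"{justice}: {pro_defendant_rate(codings):.0%}")

# What matters here is the ratio between the justices, not any single
# percentage taken in isolation.
```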
But shouldn't we count how many convictions were upheld? That's the point of prosecutions, isn't it? We certainly could do an analysis comparing convictions upheld and overturned. But there are problems with that. First, a lot of significant cases don't raise the question of upholding or overturning a conviction. The Mark Jensen case is an example. Second, if you do this analysis, you can't do it like the Butler campaign did it. You have to limit yourself to cases where the court actually addressed whether or not to overturn the convictions. No base hit if you don't come to bat. Third, it really doesn't make a lot of sense to count up the convictions when each conviction does not represent an actual decision point. If I decide to throw out one piece of evidence that overturns six convictions, did I really make six decisions?
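Here is a toy illustration of that "decision point" problem, with invented numbers: a single suppression ruling that happens to knock out six convictions is still only one decision.

```python
# Toy illustration of the 'decision point' problem. All numbers invented.

rulings = [
    # (description, convictions affected, pro-defendant?)
    ("suppress a key piece of evidence", 6, True),
    ("reject a sentencing challenge",    1, False),
]

decision_points = len(rulings)
convictions_overturned = sum(n for _, n, pro in rulings if pro)

print(f"decision points:        {decision_points}")          # 2
print(f"convictions overturned: {convictions_overturned}")   # 6
```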
And let's say that we did an analysis of convictions. We would then need to do what McBride did. We'd need to compare all of the members of the court. Let's say Justice Butler overturned 25% of the convictions or, as I think may be more likely, something around 40%. Is that too high? Too low? How does he compare to the others?
It seems reasonable to assume that the comparison is going to look a lot like McBride's. It makes little sense to think that Justice Butler is three times as likely to rule in favor of a defendant's claims generally but no more likely to overturn a conviction. But unless you do this work, I don't think the conviction number means much.
But, they say, some of these "pro-defendant rulings" are weak tea. All the defendant got is some discovery or a new hearing. Again, that doesn't matter unless we think that Justice Butler is more likely to grant defendants minor relief yet no more likely to grant major relief. That does not sound plausible.
Finally, why did I spend time on this? Part of the reason is that I have been formulating some empirical research on the court and so this kind of thing has been on my mind. But, more importantly, there are differences in judicial philosophy that ought to be discussed. There isn't, in fact, "no one here but us Scalias" and the claim that there aren't significant differences on these issues misleads the public just as much as oversimplification of the issues in some of the particular cases.
17 comments:
Let's say Justice Butler overturned 25% of the convictions or, as I think may be more likely, something around 40%. Is that too high?
Yes. It's 29%.
Agreed, Rick.
It's the DIFFERENTIAL analysis (Butler & Shirley vs. Roggensack) that counts.
Very meaningful and not bogged down in details and nuance.
Nope, we sure can't do with those pesky details and nuance in constitutional law. I remember my first con law casebook. 1300 pages or so, 30 of which contained the Constitution itself.
And the other 1275 pages just barely scratched the surface, as it turns out.
This type of analysis is not counting cases as "right" or "wrong." It's not, despite loose use of language, even counting them "for" or "against" Justice Butler. It isn't, although some slip into the language, even counting cases as "pro" or "anti" "criminal."
Perhaps then someone should tell Ms. McBride to bite her tongue much harder; she repeatedly suggests she's measuring "pro-criminal" voting activity and sums up her path-breaking study this way: "The percentage I get for Butler is 57 percent pro criminal." The uninitiated might think this not the "slip of the tongue" of a detached academic but the expression of a partisan advocate determined to influence the outcome of an election. And might further think that a "study" designed and carried out by such a bias-laden person ought to be viewed very skeptically. I mention this only because I can't tell at this point how much cross-pollination has occurred between McBride's and Professor Esenberg's respective (but apparently shared) conclusions.
Sometimes we want to step back and see the big picture. Here, we see Louis Butler is three times as likely to uphold a claim of a criminal defendant as Patience Roggensack.
This formulation of the matter does get to the nub of the dispute. Professor Esenberg says he (and/or McBride) is measuring relative propensity to rule in favor of a defendant's "claim." (McBride does away with the pretense of academic detachment and starkly says she's measuring a Justice's propensity to be "pro-criminal." Her honesty's refreshing.) If "claim" rather than "conviction" is to be the measurement, then fine; so long as it's studied in a systematic rather than haphazard way.
The tests (to my understanding) have been run against a single data set (the lump sum of all "criminal" appeals over the course of the past several terms). No attempt at stratification has been made, and as a result we have no refined sense of what the results might mean. (No, I take that back: the Butler camp made an effort to stratify, along the lines of affirmance or reversal of a conviction. I'd say that this is an important data point even if it's not considered paramount, and I'd urge Professor Esenberg to include it in any further study he undertakes.)
The sole "unit" being measured, then, is defendant's claim. Stratification problems remain, though. Say a Justice affirms one intentional homicide appeal but in another separate appeal rules that a defendant is entitled to see his pre-sentence report so that he can prepare a response to his attorney's no-merit report. That's a rate of 50% "agreement with defendant's claims." Intuitively we know that that result is loopy: the two separate outcomes simply aren't comparable. Some way will have to be found to assign appropriate weight to each sort of case. Frankly, I think that highly unlikely, but it's undoubtedly a worthy academic exercise.
The answer might be that over time enough cases enter the appellate pipeline that the sort of distortion I just identified will be smoothed out by sheer dint of the numbers. However, I'm skeptical. Because the supreme court self-selects the issues (review is a matter of discretion rather than of right), then by definition you can't rely on random variation to smooth out distortions. In fact, a fair number of cases deal with dry, technical, purely procedural problems. McBride herself acknowledges that "many of the cases (in her chosen data set) involved narrow issues" that didn't challenge the conviction.
Moreover, an "against defendant" ruling may well mask a significant underlying victory for the defendant. When the court finds error, but deems it harmless, it's arbitrary to put it in the "against" column. Professor Esenberg seeks, in his words, to measure whether one Justice is "taking a more expansive view of the rights of criminal defendants than others[.]" A ruling squarely finding error surely satisfies that metric (if it doesn't, then the enterprise is quite arbitrary, in my opinion). So what if the Justice goes on to find it harmless? There has been an underlying expansion of rights to which some weight must be given. (As an aside, I don't know how many harmless-error results were handed down during the time frame we're discussing. A pure guess would be 6 or 7 -- but if that's right, it'd be about 10% of the entire data set, a considerable number.) The same for a determination in an ineffective-assistance appeal in which the court finds deficient performance but declines to find prejudice.
It works the other way, too. On occasion, the court will grant relief to the immediate appellant but then rule that the very ground for relief is henceforth no longer viable. Rare, but it happens. Example: State v. Campbell, 2006 WI 99 (the abolition of the year-and-a-day rule, after giving Campbell its dying benefit). That case probably went in the "for defendant" column in McBride's tabulation, but why should it if the purpose is to measure the expansiveness of a Justice's view of defendant's rights? After all, the Justices eliminated that right.
I don't want to belabor this (much) further. As Professor Esenberg knows, I think very little of McBride's effort. I don't want this discussion to reprise that particular beef. Rather, I want to stress my view that on top of other defects all she's done is taken a lumpy set of data and molded it the way she prefers. Professor Esenberg, I know, disagrees. But surely he agrees that in order to assign any refined meaning, the data must first be organized in a refined way. This is how Professor Esenberg puts it:
What do these numbers mean? ... My own view is that these differences reflect different philosophical bents. Justices who rule more frequently for criminal defendants may be more skeptical of law enforcement. They may balance the tension between public safety and the desire to avoid wrongful convictions differently than justices who are less likely to rule in this way. They may adopt interpretive techniques that privilege the claims of the accused or that are more likely to credit claims made by groups such as the Wisconsin Innocence Project.
To which I say: it's a working hypothesis and might be susceptible to proof. But McBride's effort, which fails to stratify at all, doesn't do it. (Is there any correlation between a vote in favor of enforcing an obvious statutory right to PSI access and one in favor of Innocence Project claims or diminution of "public safety"? I doubt one can be shown, which is one problem among a number, but McBride certainly hasn't shown it. And if there isn't, why give them the same weight?) Much, much thought will have to go into a study design. Mr. Tenant's lapidarian case-by-case effort is one worthy approach, even if it takes editorial rather than number-crunching form. If Professor Esenberg does undertake his own such effort, I hope he adequately refines the data -- that's really the burden of this post.
I want to offer my thanks to Professor Esenberg for expending his resources for open public debate on this important topic.
Great stuff as usual, Atty. Tyroler.
One final point. I see now that the "analysis" -- such as it is -- has lately moved from Butler's own record to Butler's record in comparison to the other Justices.
Only two problems. In the so-called comparative analysis, Butler's figure remains at 58% "pro-criminal." This figure is derived from a comically faulty reading of the law and apparently deliberate misuse of endlessly shifting criteria.
In other words, the figure is comparable to those of the other Justices only if their requisite percentages were similarly derived.
To put it mildly, it strains credulity to expect anyone to believe that someone, in a day or two, has objectively sifted through each decision and, from the various majority opinions, partial dissents and concurrences/dissents, identified, interpreted, collated, and placed in the correct column of either "pro-criminal" or "not pro-criminal" each individual issue addressed by each individual Justice.
And I remain singularly astonished at this statement:
"I think [it] may be more likely [that Justice Butler overturned] something around 40% [judgments of conviction]."
You "think"? "More likely?" Where is your supporting data?
Contrary to the assertions of Illusory Tenant and other members of the defense bar, academics calculate numbers like these all the time for various purposes.
Of course they don't. That's just absurd. Unless by "numbers like these", you just mean "58", "40", and "25", which admittedly do figure in academic work from time to time...
But if you really mean that respectable, intellectually serious academics use numbers so derived, then it's an intriguing question just who you've been reading, and who you consider an academic. McBride, one supposes, is the only "academic" you could have in mind.
William Tyroler correctly points out some important and, let's be honest, utterly basic data-analysis problems with the McBresenberg assertions. Of course a conclusion follows quite naturally, as it has all along. When you find someone massaging the data using misrepresentative, ill-defined, arbitrary measures, the immediate question is why they're using those misrepresentative, ill-defined, arbitrary measures out of the limitless other lousy measures they could have used. If their innumeracy/arbitrariness is generating the conclusion they transparently want, though, you generally have your answer right there.
And so it goes in this case.
Oh just lighten up - everyone.
I for one am glad it is over.
Rick do you still live here?
Of course they don't. That's just absurd. Unless by "numbers like these", you just mean "58", "40", and "25", which admittedly do figure in academic work from time to time...
I gave you an example. Do you want more?
Attorney Tyroler's points are fair, but I acknowledged and responded to them in the course of these posts. You could make a judgment about whether a ruling is "important enough" or you could adopt a different method of analyzing the data. Overturning and upholding convictions is a candidate, albeit not a comprehensive one. But if you did it, you'd have to do it right, and the Butler campaign did not.
Doing it the way that I suggest has the advantage of objectivity and consistency. It will occasionally require coding as "against defendant" some cases that are arguably or even clearly "for defendant." That's where McBride and I disagreed. I see her point, but I am trying to avoid the problem suggested by Brother Tyroler. You'd code his hypos "against defendant" because the defendant didn't get relief.
To put it mildly, it strains credulity to expect anyone to believe that someone, in a day or two, has objectively sifted through each decision and, from the various majority opinions, partial dissents and concurrences/dissents, identified, interpreted, collated, and placed in the correct column of either "pro-criminal" or "not pro-criminal" each individual issue addressed by each individual Justice.
That is what she says she did (and I have no idea how long it took), although she made the task easier by coding by case rather than by issue. This enabled her to look for whether the defendant won anything and then see who joined in that. That's why she counts Young as "for defendant." You can argue that this overstates the individual "pro defendant" number, but it should, over the run of cases, even itself out among the members of the court. Since that's what I'm interested in, I don't much care about the 58% number. After my adjustment of it, it's a correct statement of what it is supposed to measure (there are no mischaracterizations of the law or shifting definitions), but, for the 97th time, taken alone, it doesn't mean much.
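Here is a sketch of that per-case coding rule as I have described it. The case and the votes are hypothetical, invented purely for illustration; the point is that a justice who joins only a narrow win still gets coded "for defendant" for the whole case.

```python
# Hypothetical per-case coding rule: a justice is coded 'for defendant' (1)
# in a case if the defendant won at least one issue and the justice joined
# the result on that issue. Issue labels and votes are invented.

case = {
    # issue: justices who voted for the defendant on that issue
    "new evidentiary hearing ordered": {"Butler", "Bradley"},
    "challenge to the conviction rejected": set(),
}

def code_case(per_issue_votes, justice):
    """Return 1 if the justice joined any pro-defendant result in the case."""
    return int(any(justice in voters for voters in per_issue_votes.values()))

for j in ("Butler", "Roggensack"):
    print(j, "->", code_case(case, j))
# Butler -> 1 (coded 'for defendant' even though only a narrow issue was won)
# Roggensack -> 0
```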
Nope, we sure can't do with those pesky details and nuance in constitutional law.
Right. I spend half of my waking hours talking about such stuff.
IT wrote: You "think"? "More likely?" Where is your supporting data?
Sound and fury, that's all there is. And it may well work.
How many people do you think will look closely enough to realize that there's a great sucking nothingness at the heart of these allegations about Butler?
What they'll see or hear is "pro-criminal", "rules for defendants", "against law enforcement", and then a bunch of numbers. What, some people dispute these numbers? They say the alleged data is borderline meaningless, ginned up to fit predetermined conclusions? Oh, well -- let the plaintive mumbling about "getting bogged down in details and nuance" (thinking is so hard) commence.
Mr Esenberg, I must have missed your example of respectable academic work that defends stark, simplistic allegations on the basis of admittedly flawed, one-dimensional analyses of complex, multi-level, subtle data points.
Could you remind me, before you trouble yourself to provide "another one"?
You "think"? "More likely?" Where is your supporting data?
Could you remind me, before you trouble yourself to provide "another one"?
Go back and read what I wrote. One of my colleagues at Marquette used voting for or against a claim of a criminal defendant to determine a judge's disposition on criminal justice matters and then looked to see whether it changed as elections approached, etc.
A recent study by a law prof at Tulane looked at how campaign contributions affect judicial decisionmaking. Because he needed a baseline, he calculated the extent to which a judge had ruled in favor of or against plaintiffs.
I have granted that this is simple and doesn't capture everything. But it is one way to get at a big picture and, as I would expect, not one person here has actually suggested that the big picture the data points to, i.e., the differences among the justices on criminal justice issues, is wrong.
As for IT's point, I explained that I spot-checked Butler's conviction analysis and found a significant number of mistakes, e.g., counting Jerrell CJ and Richard Brown as upholding convictions. The mistakes that I found get it to a bit under 40% or over, depending on what you do with them.
As the Estimable Professor knows, once "law" becomes separated from common sense, "law" will have no persuasion in daily affairs.
While details and nuance are important for SOME purposes, attorneys and judges who specialize in ever-more-exotic pursuits of same risk making Law a laughingstock.
Butler and Shirley have succeeded in that quest in both the casino gambling and self-defense arenas; as a result, both the Justices and the "law" they wrote are laughingstocks.
That disconnect, IT, is serious, and your supercilious remarks are foolish at best.
In the real world, no Justice is "pro-criminal."
Activist bloggers are making this nonsense up.
Gableman is a front man,
Activists on the far-right are trying to make him into a legit candidate.
It's like putting a rolled up sock down your jeans before you hit the bars.
Further to attorney Tyroler's comments, stratifying the data might be useful but I'd think you wouldn't do it by convictions but rather by some categorization of the issues presented.
stratifying the data might be useful but I'd think you wouldn't do it by convictions but rather by some categorization of the issues presented
Perhaps. As I labored (not well) to say, I think that would be 1) a worthy academic pursuit but 2) ultimately doomed to failure. If that's so, then conviction rate wins by default. Among myriad problems, I think at some point you have to distinguish between severity of underlying crime (say, simple possession of marijuana and intentional homicide), let alone type of issue (mere procedural issue vs. reversal of conviction), if you want to measure justices' differing views on "public safety." I doubt it can be done, but I'm a bit surprised no one has tried.
The most demonstrably sensible poster is Reddess (6:19 a.m.): she should probably be tasked to design the study.
Thank you w.t.
Herself has to go have a Harp after this one.