Eric J. Fuchs
How Bexar County prosecutors used data analytics during jury selection to secure a stiff prison sentence for a first-time offender in a double intoxication manslaughter trial
Taylor Rosenbusch was a 19-year-old culinary student when her late night of partying resulted in a deadly head-on collision in the early morning hours of Mother’s Day 2011. As Rosenbusch, in her silver Jeep SUV, began driving the wrong way down IH-35, Keith Hernandez, 23, and Tony Morin, 45, were carpooling to work at the Wal-Mart Distribution Center in New Braunfels. At approximately 4:00 a.m., Rosenbusch was traveling southbound in the northbound lanes, directly in their path. The two vehicles crashed head-on at full speed. There was no time to react; neither car braked. Keith and Tony were both pronounced dead at the scene. Rosenbusch’s blood alcohol concentration one hour after the crash was 0.26.
Her guilt was not at issue.
After the crash, Rosenbusch was immediately remorseful. One police detective who made the scene testified that her remorse was more sincere than that of other defendants he had seen in similar situations. He even stayed in touch with her over the 2½ years the case was pending and testified that she was a troubled young person who just needed help. Rosenbusch tried to commit suicide while awaiting trial, an event she graphically described to jurors as a response to the grief and remorse she felt.
Young, pretty, and sympathetic, she had no criminal history. She hired experienced, skillful attorneys. The defense put on evidence that Rosenbusch had been sexually abused as a child, was an alcoholic, and had been attending counseling. She showed the jury the deep scarring on her arms from her suicide attempt. She was asking the jury for probation.
But we did not feel that was appropriate, and the victims’ families wanted a stiff prison sentence.
Given Bexar County juries’ history of granting probation in similar cases, fellow prosecutor Clayton Haden and I knew that the outcome of the punishment trial would likely be decided by the makeup of the jury. So we wanted as favorable a jury as possible. When it came time to make our peremptory strikes, we wanted specific, calculable information on every juror—we did not want strike decisions left to educated guesses based on isolated comments, body language, and gut feel. To achieve that, we needed more information than a typical jury selection could provide.
Adding an analytical element to jury selection solved this problem. Using the strategy outlined below, we generated comparative data and information for every juror on the panel, which enabled us to evaluate individual jurors against the rest of the panel in a balanced, impartial way. Armed with this information, we could eliminate the least favorable jurors from the panel and thus maximize the possibility of a stiff prison sentence.
Generating usable data
To make analytical decisions, we needed data to analyze. To generate that data in the course of a criminal trial, we used a series of “scaled” questions—questions designed to identify where a juror falls on a pre-determined scale for a specific topic.
The most difficult part of using scaled questions is crafting solid ones. We wanted probative questions that addressed the central issues in the case. Because Rosenbusch’s case boiled down to punishment,1 the questions needed to provide information on the broad spectrum of punishment considerations that might arise. I also wanted to confront issues the defense would likely raise, such as the defendant’s lack of intent, lack of criminal history, and youth. And I wanted to lay the foundation for some of our punishment arguments—deterrence and the impact on the victims’ families—and make it clear we would be seeking prison time.
We also had to be careful not to phrase any scaled question as an improper commitment question.2 Texas courts have held a question is proper if it seeks to discover a juror’s views on an issue applicable to the case,3 and litigants are given “[broad] latitude” to “inquire into a prospective juror’s general philosophical outlook on the justice system.”4 So our scaled questions were not crafted to challenge any prospective juror for cause; rather, they were merely about gathering information.
We whittled down and refined our questions until we settled on six of them (I’ll walk through them below). Taking the information from the six questions as a whole gave us a fairly good understanding of each juror’s philosophical outlook on punishment. Knowing this information was critical to making intelligent strikes and ending up with as favorable a jury for the State as possible.
You will notice that the answers to each question are calculated to place each juror on our pre-determined scale. For all six questions, Answer No. 1 is least favorable to the State, Answers No. 2 and 3 are incrementally more favorable to the prosecution, and Answer No. 4 is the most favorable. Each answer was worth the same number of points as its numerical value (Answer No. 2 was worth two points, etc.), which was crucial for generating numerical data we could analyze.
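To make the arithmetic concrete: a juror’s six answers simply sum to a total between 6 (all 1s) and 24 (all 4s). The short sketch below illustrates the idea in Python; we used a spreadsheet at trial, and the juror numbers and answers here are invented for illustration.

```python
# Each juror's answers to the six scaled questions (values 1-4).
# Juror numbers and answers are hypothetical examples.
answers = {
    9:  [3, 4, 2, 3, 4, 3],
    15: [2, 1, 2, 3, 2, 2],
}

# An answer is worth points equal to its number, so a juror's total
# is just the sum of the six answers.
scores = {juror: sum(a) for juror, a in answers.items()}
print(scores)  # {9: 19, 15: 12}
```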
The scaled questions
Question 1
A jury’s punishment verdict in an intoxication manslaughter case can change behavior in the community.
1. Strongly disagree. It will have no effect on the community.
2. Disagree. Punishment is only about the defendant on trial.
3. Agree. When the public sees the punishment, they might make different decisions.
4. Strongly agree. Strong verdicts deter future crime.
The first scaled question addressed deterrence, one of our strongest arguments that Rosenbusch should be sent to prison despite her lack of criminal history. We wanted to plant the seed with jurors that they could change future behavior (and thus save lives) by sentencing someone to prison for intoxication manslaughter, even if the defendant had no prior criminal history. Not everyone will agree with this premise, but those who do would be more favorable to the State and more likely to be persuaded when we later made that argument in court. Most importantly, we wanted to know up-front which jurors did not agree with this proposition so we could strike them.
Question 2
The main purpose of sentencing for intoxication manslaughter is:
1. Rehabilitation. Everyone makes mistakes; the defendant will change.
2. Restitution. Helping the victim recover.
3. Deterrence. We want people to know they can’t do this in Bexar County.
4. Punishment. The defendant harmed someone, so she must be punished.
The second question gauged jurors’ general philosophical outlook on punishment in intoxication manslaughter cases. It was drafted with an eye toward identifying those jurors who might, in general, take a softer position on punishment. For this case, jurors who leaned toward rehabilitation were less favorable to us than those who believed deterrence and punishment were the primary purposes of sentencing.
Question 3
How do you feel about assessing a lengthy prison sentence to a first-time offender?
1. Very uncomfortable. Everyone deserves a second chance.
2. Uncomfortable. If it was a one-time mistake, we can rehabilitate.
3. Comfortable. If the facts of the case support it.
4. Very comfortable. Do the crime, do the time.
The third scaled question dealt with the prospect of sending a first-time offender to prison. This question clarified to the panel early on that we would be seeking a lengthy prison sentence so that no juror would be surprised later. And by discussing it early, we hoped to see how comfortable or uncomfortable jurors were with the principle generally. We thought that the more we talked about prison in the context of a first-time offender, the more comfortable the jurors would be assessing a prison sentence. And by talking about a lengthy prison sentence, rather than just a prison sentence generally, we hoped to establish that probation was not appropriate in this case, thus confining future deliberations purely to a discussion of how much prison time was appropriate.
Question 4
The most important factor in determining the appropriate punishment is:
1. The defendant’s age and actions since the crime.
2. The defendant’s criminal history.
3. The seriousness of the crime.
4. The injury the crime caused.
The fourth scaled question was similar to the second (just framed a little differently) in that it sought to explore jurors’ general philosophical outlooks regarding punishment. This time we used examples we expected the defense to raise from Rosenbusch’s own situation (her age, actions since the crime, and lack of criminal history) as the answers less favorable to our desired outcome. Jurors who thought her youth and lack of criminal history were most important were less favorable to us than jurors who focused on the crime itself.
Question 5
A defendant who is remorseful should be punished less severely.
1. Strongly agree. Pain of guilt is punishment enough.
2. Agree. Remorse should be considered more than the crime committed.
3. Disagree. Remorse is good, but it doesn’t change what happened.
4. Strongly disagree. Everyone is sorry afterward.
The fifth scaled question dealt with remorse head-on. We knew the defense would likely point out Rosenbusch’s remorse to win sympathy from the jury and have jurors relate to her situation. Because defendants in intoxication manslaughter cases do not set out intending to kill anyone, they are frequently remorseful after the fact. To confront this reality, we asked jurors about the compelling role that remorse sometimes plays in criminal trials. Rather than waiting for this sympathy to play the role it naturally would in jurors’ later deliberations, we gathered information about the jurors’ views up-front.
Question 6
How important are the victims when assessing punishment?
1. Not important. Only the defendant’s actions and past matter.
2. Slightly important. Victims matter, but defendants matter more.
3. Important. Victims matter more than defendants.
4. Very important. The harm caused is the main consideration.
The last question addressed the role that victims play in punishment. Two people were killed. Two families were destroyed. The effect on these victims was going to be a big part of our punishment case and argument for a lengthy prison sentence. We wanted to know up-front what role the jurors generally felt this type of evidence would play.
Analyze the data before trial
Once the questions were drafted, I devoted some time well before trial to developing and pre-analyzing the scoring system we would use. In Bexar County, most district courts give each side less than 15 minutes to make strikes, so we needed the ability to make effective use of the analytics quickly and easily.
Essentially, I set a minimum “score” (18 in this case) that a favorable State’s juror would reach after answering all six questions—these jurors would be marked green. Jurors who scored 15 or less were considered defense-leaning and were marked red. I then defined which responses and combinations of responses would indicate jurors less favorable to us, regardless of total score—these jurors would also be marked red. (A number of additional layers of analysis went into generating this scoring key; they are too lengthy to discuss in the print version of the journal but are covered at length in the extended version of this article, attached below as a Word document.) With the “thinking” already done, the score sheet would just need to be added up and the answers color-coded for quick later use.
There are two ways to efficiently analyze the jury selection sheet in trial: 1) have a trial partner or a third party handle the scoring and analytics manually, or 2) have a spreadsheet ready to go and let Excel do the computing for you.5 Both operate off the same principles; the only difference is whether the computing and analytics are done by hand or automatically. Recording the information by computer gives you the results instantly and makes it easy to add additional layers of analysis, but hand-computation works as well. Regardless of which you choose, you will want help: Someone else should do the input while you devote your attention to the jury pool.
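If you take the spreadsheet route, the sketch below shows roughly what an automated scoring key can look like, rendered here in Python rather than Excel for readability. The 18-point green cutoff and the 15-point red cutoff come from this case; the “any answer of 1” override is a hypothetical stand-in for our actual combination-of-responses rules, which are detailed only in the extended version of this article, and the panel data is invented for illustration.

```python
GREEN_MIN = 18  # minimum total for a favorable State's juror (marked green)
RED_MAX = 15    # maximum total for a defense-leaning juror (marked red)

def flag(answers):
    """Score and color-code one juror's answers to the six scaled questions."""
    score = sum(answers)
    # Hypothetical override for illustration: any "1" answer marks the
    # juror red regardless of total score. (The real combination rules
    # appear in the extended version of this article.)
    if 1 in answers:
        return score, "red"
    if score >= GREEN_MIN:
        return score, "green"
    if score <= RED_MAX:
        return score, "red"
    return score, "neutral"  # totals of 16-17 fall between the cutoffs

# Hypothetical panel: juror number -> answers to the six questions
panel = {9: [3, 4, 2, 3, 4, 3], 15: [2, 1, 2, 3, 2, 2], 23: [3, 3, 2, 3, 3, 3]}
for juror, juror_answers in panel.items():
    print(juror, *flag(juror_answers))  # e.g., "9 19 green"
```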
In Rosenbusch’s case, we had a very skilled intern (and now a full-fledged prosecutor in our office), Fidel Rodriguez, enter the numbers into a laptop as the jurors called out their responses. Clayton recorded the responses by hand. I have found through experience that it is best to have two people recording the responses in case either one misses an answer.
Practical considerations at trial
We were given one hour to conduct general voir dire. We do not generally use juror questionnaires in Bexar County, but if you are in a jurisdiction that does, consider using scaled questions there. I displayed the scaled questions on PowerPoint slides, scattered throughout the rest of my jury selection.
Before we started, the defense, having done their homework and knowing what was coming, asked the judge to see our PowerPoint slides and to be provided a copy of them. I had no problem showing them the slides so they could make objections,6 but I did not want to provide a copy. The court agreed that I did not have to provide a copy of the slides. The defense objected to many of the slides for varying reasons—and the court granted a few objections to some images I had planned to use—but all of the scaled questions remained.
As I conducted my voir dire, I explained to the panelists that because there were 60 of them, I had only one minute to talk to each person. I did not want people to feel compartmentalized or reduced to a number, but that was simply not enough time to know much about them. In that context, when I got to the first scaled question, I explained that I would be asking some general questions to the group and getting quick individual responses from every person. I told them to answer each question as honestly as possible, giving me only the number (1, 2, 3, or 4) of their answer choice.7 As I worked through the panel on the first scaled question, I asked Juror No. 15 what answer Juror No. 9 had provided. Like the rest of the panel, Juror No. 15 had no clue. It was a simple and effective way to point out that no one was paying attention to other jurors’ answers, so they should answer honestly, even if they chose a less common choice. By doing this, I hoped to ensure the reliability of the data. It took two to three minutes per question to get through the jurors’ responses, leaving plenty of time for the rest of my voir dire.
Although asking these types of questions may feel awkward at first, remember that it is not awkward to the jurors—most of them have never sat through a jury selection, so they don’t know what to expect. At the very least, it gets them thinking and encourages participation. In Rosenbusch’s trial, only two jurors had a problem definitively answering any of the questions—this was actually atypical.8 Those two jurors felt that they could not decide between answer choices 2 and 3 on one of the questions. I politely asked them to pick which was best, and both did. But with vacillating panelists like that, we at least learned a different and equally valuable piece of information: Those particular jurors had difficulty making a decision on a fairly easy subject. These were not jurors I wanted, and neither made the jury.
After I conducted general voir dire, Fidel went to work behind the scenes. Because we had put in all the analysis work before trial, it was easy for him to print color copies of the spreadsheet for us. In prior trials before we graduated to the spreadsheet, Fidel would spend this time adding the responses and color-coding the sheet himself based on each trial’s pre-determined score sheet. This process would normally take him 15 to 20 minutes.
Whichever way you do it, the important thing is to have the analyzed sheets at your disposal before the defense finishes its general voir dire. Once the computing is complete, you can quickly identify which jurors are more favorable and which are less. We did not waste our time during specific voir dire talking to favorable jurors; we used that time to target unfavorable jurors to see if there was anything about their views that was challengeable for cause. With this approach, we eliminated several jurors through challenges for cause that we otherwise would have had to use peremptory strikes on.
In reviewing the final jury sheet in Rosenbusch’s case, I saw that the panel as a whole was less favorable to the State than other intoxication manslaughter panels I have had, meaning that the final jury likely would not be as favorable as previous juries. But the goal of the analytics is to help eliminate the least favorable jurors relative to a particular panel, so we focused on that.
When it was time to make our peremptory strikes, we trusted the data. We struck the least favorable of those who remained based on their answers to the scaled questions. Because there were no glaring gaps in our knowledge of the jurors, we were confident we had eliminated the least favorable jurors of the panel.
The defense had four lawyers for jury selection on this case. They made intelligent strikes as well. (There were no double strikes.) Afterwards, in reviewing the data for the panel as a whole, I observed that all of the jurors with relatively strong views in either direction were eliminated, which certainly isn’t a bad place to be.
We were confident that we had eliminated the jurors with philosophical punishment beliefs most contrary to the lengthy prison sentence we would be asking for. For instance, not a single juror who answered “1” to any scaled question made the jury. Only one of the 12 jurors had said he was uncomfortable at the outset assessing a lengthy prison sentence to a first-time offender. Eight of the 12 final jurors believed that their verdict could change behavior in the community. Only one juror felt that a remorseful defendant should be punished less severely. And all but one of the jurors thought that victims mattered more than defendants when deciding punishment.
Moreover, when we looked at each juror’s answers as a whole, none of them leaned unfavorably on more than two scaled questions, meaning each gave favorable answers on at least four of the questions. Six of them met our pre-defined threshold indicating they would be strong State’s jurors. We knew we had a chance.
Analysis of the analysis
As expected, Rosenbusch pleaded guilty at the outset of trial, and we proceeded immediately into the punishment phase. After four days in court, the jury sentenced her to 12 years in the penitentiary on each case and made an affirmative finding of a deadly weapon. The court then stacked the sentences at our request.
The jury left without talking to either side, so I mailed each of them some follow-up questions with return envelopes. I received only three responses, but those responses seemed to validate the data we had on each of those jurors from jury selection. Two jurors said that when deliberations first started, they had a sentence of 20 years on each case in mind; the jury selection data had indicated they were both strong State jurors. The third juror told us she first had a sentence of six years on each case in mind; the data had indicated she was a middle-of-the-road juror and had been the last juror we left on.
Was the use of data analytics the difference in this case? It’s hard to tell, as there is no way to truly know what ultimately decides any trial—and I would like to think we did some other things effectively in the trial as well. Or maybe it was just the facts. But what I do know is that we eliminated the least favorable jurors relative to the rest of that particular panel, which minimized the risk of an outcome unfavorable to the State. We relied on targeted, comprehensive, and objective data to make our strikes. And ultimately, jurors assessed a stiff prison sentence on a defendant with no criminal history. For the victims, Keith and Tony, and their families, this was justice.
Broad application
Though this system was used here on an intoxication manslaughter punishment case, data analytics can be a valuable addition to jury selection in any type of case. When crafting your trial strategy, evaluate the problem areas in the case and define your goals well before trial starts. You can then use scaled questions during jury selection to target those goals. Is it a case where a real issue exists with guilt or innocence? Or are you simply attempting to maximize punishment? Either way, you can draft questions to address the weaknesses and central issues of the particular case.
Asking scaled questions will feel different at first, but using them in jury selection provides a multitude of benefits. You will generate concrete data about the jurors sitting before you. Based on that data, you can run simple analytics to identify the jurors most and least favorable to your position. You can then attempt to eliminate as many of the less favorable jurors as possible through challenges for cause and exercise your peremptory strikes in an objective, effective way. Finally, you can analyze the final jury as a whole to help shape and define the arguments in your case.9 By consistently eliminating the jurors least favorable to the prosecution, and thus minimizing risk, you will enjoy more favorable trial results over time.
More information
Considerably more thought went into the analysis and presentation of our scaled questions than could be explained in the print version of this article. Check out the Word document below for a more comprehensive guide on how to incorporate scaled questions into future jury selections.
Endnotes
1 We expected Taylor Rosenbusch to plead guilty based on the evidence and defense counsel’s pre-trial representations. But even if she had not, we were comfortable enough with proving guilt that we planned to spend the majority of voir dire on punishment considerations either way.
2 See Standefer v. State, 59 S.W.3d 177, 179-184 (Tex. Crim. App. 2001).
3 Sells v. State, 121 S.W.3d 748, 756 (Tex. Crim. App. 2003), citing Barajas v. State, 93 S.W.3d 36 (Tex. Crim. App. 2002).
4 Sells, 121 S.W.3d at 756 n.22.
5 I am not an expert on Excel, but setting up a basic spreadsheet to do the level of computing required for the analytics used here is relatively easy. For those unfamiliar with Excel, many websites walk through how to set up formulas and conditional formatting.
6 I figure it is better to let the defense object up-front and obtain rulings then (so you know how to adapt) than to have your rhythm interrupted later.
7 If you want to stay on the good side of your court reporter, give her a heads-up beforehand that you are going to employ this tactic. All of the answers are challenging to take down, and if you want a clean record, I have found that different court reporters have different preferences for how you call out the numbers. Usually, I say, “Juror 1” and then let the juror say his answer number, then “Juror 2,” “Juror 3,” and so on, so that each juror is identified for the record. You can also control the speed of the incoming answers somewhat by operating this way. Developing a clear record will also provide supporting evidence should any challenges arise to your peremptory strikes.
8 If jurors truly cannot decide between two answer choices, I will split the difference, such as 2.5 for a juror who cannot decide between answer No. 2 and answer No. 3.
9 This was not addressed in this article, but it is covered in the fuller version available below as a Word document.