United States v. Garcia

United States District Court, District of New Mexico
Oct 3, 2024
No. 22-CR-1171-JCH (D.N.M. Oct. 3, 2024)

Opinion


UNITED STATES OF AMERICA, Plaintiff, v. ADRIAN GARCIA, Defendant.


MEMORANDUM OPINION AND ORDER

Before the Court is Defendant Adrian Garcia's Motion to Exclude Trial Witnesses for Federal Rule of Criminal Procedure 16 Violations (ECF No. 61). Defendant moves the Court under Rule 16 to exclude testimony from FBI Firearms and Toolmark Examiner Erich Smith and from any Albuquerque Police Department (“APD”) officers who wrote a report that has not yet been disclosed. Additionally, Defendant requested an evidentiary hearing under Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993). After full briefing, this Court held an evidentiary hearing on the motion on March 26, August 28, and August 29, 2024. This Court heard testimony from the Government's proposed expert, Erich Smith, and from Dr. Michael J. Salyards, who provided testimony for the defense on firearms and toolmark validation research studies. Having considered the motion, briefs, evidence, and applicable law, the Court will grant in part and deny in part Defendant's motion. The Court will permit Mr. Smith to testify as an expert in the field of firearm and toolmark analysis and to give the opinion set forth in his report, including the grounds and reasoning in support. The Court, however, will grant the motion in part by limiting the scope of Mr. Smith's testimony to the standards set forth in the United States Department of Justice Uniform Language for Testimony and Reports for the Forensic Firearms/Toolmarks Discipline Pattern Examination, which the Court finds excludes testimony that there is a “negligible” possibility that there could be another gun out there that produces the same patterns. Finally, the Court will deny as unripe Defendant's request to exclude reports made by any APD officer whose report has yet to be disclosed.

I. FACTUAL BACKGROUND

Defendant Adrian Garcia is charged in a three-count indictment: (Count 1) Carjacking in violation of 18 U.S.C. § 2119(1); (Count 2) Using and Carrying a Firearm During and in Relation to a Crime of Violence, and Possessing a Firearm in Furtherance of such Crime, and Discharging said Firearm in violation of 18 U.S.C. § 924(c)(1)(A)(iii); and (Count 3) being a Felon in Possession of a Firearm and Ammunition in violation of 18 U.S.C. §§ 922(g)(1) and 924. (See Indictment, ECF No. 4.) The charges arise from an alleged carjacking that occurred on May 22, 2022. (Id.) According to the Government, it will present evidence at trial that the offender possessed a firearm; he discharged the firearm in furtherance of the carjacking; and as he fled the scene, he discarded the firearm, which police recovered. (See Gov.'s Resp. 2, ECF No. 80.)

A. Notice of Proposed Expert Testimony of Erich D. Smith

The United States filed a Notice of Expert Testimony asking the Court to find that the proposed testimony of Firearms and Toolmark Examiner Erich Smith is admissible at trial. (Notice 1, ECF No. 48.) The Government stated that it provided to the defense Mr. Smith's Curriculum Vitae, listing his qualifications, publications authored in the previous 10 years, and a list of cases from the previous four years in which he testified as an expert. (Id.; see also Gov.'s Ex. 1 (Smith CV).) According to the Notice, Mr. Smith will describe the accreditation of FBI Laboratory facilities and the process of firearm and toolmark identification, including examining the firearm, using virtual comparison microscopy (“VCM”), and making microscopic comparisons between the cartridge cases and the known test fires to see whether the cartridges were fired from the same firearm. (Id. at 1-3.) Mr. Smith's opinion in this case is that, based on his examination of the pistol and cartridge casings recovered on the date of Defendant's arrest, the spent cartridge case recovered from the scene of the carjacking was fired from the firearm that was recovered by police. (See id. at 2.) Finally, Mr. Smith will testify that the cartridge cases were searched in NIBIN (the National Integrated Ballistic Information Network) and there was a NIBIN match. (Id. at 4.)

NIBIN is an ATF database of three-dimensional digital ballistic images of spent shell casings recovered from crime scenes and crime gun test-fires that can automatically generate a list of potential matches. United States v. Hunt, 63 F.4th 1229, 1239 (10th Cir. 2023).

B. Procedural History and Timing of the Government's Expert Disclosures

Magistrate Judge B. Paul Briones entered a discovery order on August 25, 2022, in which Defendant was deemed to have requested discovery, giving the Government 14 days to provide Defendant with Rule 16 discovery. (Am. Order 1-2, ECF No. 21.) The Order said the Court would set a date before trial by which expert disclosures were due. (Id. at 2.)

After multiple defense motions to continue trial, (see, e.g., Def.'s Mots. to Continue, ECF Nos. 22, 24, 26), as relevant here, in November 2023, the Court set trial for February 20, 2024, (Order, ECF No. 36.) On November 15, 2023, the Government requested a firearm and toolmark analysis. (Def.'s Ex. G, ECF No. 61-7.) Mr. Smith finished his report on December 28, 2023, (Def.'s Ex. E, ECF No. 61-5), and sent the prosecution notes and records pertaining to his report around January 9, 2024, (see Def.'s Ex. H, ECF No. 61-8).

After Defendant filed an unopposed motion to continue trial (ECF No. 37), two motions to suppress (ECF Nos. 38 and 39), and a motion to dismiss (ECF No. 40), this Court set trial for March 25, 2024, and set a deadline for the disclosure of expert testimony and reports on February 9, 2024, (Order, ECF No. 41). On January 31, 2024, the defense emailed the Government a disclosure request letter, purportedly asking for, among other things, all arrest reports from arresting officers and expert disclosures, including all records of examinations or tests. (Def.'s Ex. D, ECF No. 61-4; Def.'s Mot. 5, ECF No. 61.)

On February 9, 2024, the Government filed the Notice (ECF No. 48) regarding Mr. Smith's testimony. The same day, the Government disclosed a three-page laboratory report dated March 14, 2023, regarding the NIBIN lead and Mr. Smith's December 28, 2023, laboratory report. (Def.'s Ex. E, ECF No. 61-5.) The report included the results of the examination: “The Item 5 cartridge case was identified as having been fired in the Item 2 pistol.” (Id. at 1.) On February 16, 2024, the defense requested from the Government the underlying analyses and photographs for the reports. (Def.'s Ex. F, ECF No. 61-6.) Five days later, the defense received in discovery 42 media files and over a thousand pages in expert materials. (Def.'s Mot. 7, ECF No. 61.)

C. Rule 16 Motion and Subsequent Procedural History

Defendant filed his Motion to Exclude Trial Witnesses for Federal Rule of Criminal Procedure 16 Violations (ECF No. 61) on February 27, 2024. Defendant asserts that the Government belatedly requested its microscopic toolmark analysis from Mr. Smith, and that despite his completion of his report on December 28, 2023, the Government did not disclose to the defense all the materials underlying the report until February 21, 2024, shortly before the March 25, 2024, trial date. Defendant argues the Government violated Rule 16 because he did not receive the needed expert notes, evidence, and summaries until six days before Daubert motions were due. Additionally, Defendant argues that the disclosures are incomplete because the report does not contain what toolmarks Mr. Smith used to form his opinion, so the bases and reasons are absent.

This Court subsequently continued the March 25, 2024, trial because of the numerous motions filed in the case that required a hearing. (Order 1, ECF No. 75.) The Court held a Daubert hearing on the pending motion on March 26, 2024, but it held the hearing open to consider Defendant's Motion for Defense Expert at the Daubert Hearing (ECF No. 94), filed the same day as the hearing. (See 3/26/24 Hr'g Tr. 214:9-216:14.) Defendant requested time to retain his own expert, Dr. Michael J. Salyards, a forensic research scientist, to provide testimony on the scientific toolmark validation research studies that Mr. Smith relied on in the hearing and to explain why current validation studies could be masking higher false positive error rates. (See Order 1, ECF No. 102.) The Court subsequently granted the motion to allow Dr. Salyards to testify, (Order, ECF No. 102), and set the continuation of the hearing for August 28, 2024, based on the availability of the witnesses, (Order 1, ECF No. 107). Trial is currently set for October 15, 2024. (Order, ECF No. 122.)

D. Erich Smith's Testimony at the Daubert Hearing Regarding his Methodology and Opinion

At the Daubert hearing, Mr. Smith testified about how firearm manufacturing tools used to produce a firearm cause the firearm to make marks on bullets and cartridge cases. (See 3/26/24 Hr'g Tr. 36:2-54:7, ECF No. 99; Gov.'s Ex. 15 at 10.) Toolmarks are alterations of the surface topography of an item created by forceful contact with a harder surface, called a tool. (Gov.'s Ex. 15 at 6.) Examiners in this field look at certain class characteristics: for cartridge cases, the caliber, type of breech face marks, and firing pin impression; and for bullets, the caliber/diameter, number and direction of lands/grooves, widths of lands/grooves, and weight. (See Gov.'s Ex. 15 at 8; 3/26/24 Hr'g Tr. 45:11-47:22.) Examiners also look at individual characteristics and subclass characteristics. (See 3/26/24 Hr'g Tr. 48:3-57:17.) Individual characteristics are the marks produced by the random imperfections or irregularities of tool surfaces, viewed at the microscopic level, which are caused by the manufacturing process and/or by use and wear, damage, abuse, deterioration, and corrosion. (See id. at 48:3-56:3; Gov.'s Ex. 15 at 9, 13.) Subclass characteristics are a subset of a class that occur in certain types of manufacturing processes where they are consistent among items made by the same tool in the same approximate state of wear, and which may represent the profile of the tool. (See 3/26/24 Hr'g Tr. 56:6-57:15, ECF No. 99; Gov.'s Ex. 15 at 14.) Studies published in the field support the idea that the individual characteristics are unique to each firearm. (See Gov.'s Ex. 5 at 1.)

Mr. Smith uses the E3CV methodology: evaluation, classification, comparison, conclusion, and verification. (3/26/24 Hr'g Tr. 58:5-13.) The method is memorialized on the Association of Firearm and Toolmark Examiners (“AFTE”) website based on documents by the Scientific Working Group for Firearms and Toolmark Identification (“SWGGUN”). (Id. at 58:16-23; Gov.'s Ex. 15 at 30.) For the evaluation step, Mr. Smith evaluates an item to determine if it has class or individual characteristics, and if it does, it will move through the examination process. (See 3/26/24 Hr'g Tr. 58:10-59:15, 61:15-62:7.)

The next step, classification, involves determining if there is a difference in class characteristics between the evidence cartridge and the test cartridge. (See id. at 58:10-59:8, 144:7-24.) For bullets, class characteristics include caliber/diameter of the bullet and the dimensions, number, and direction of twists of lands and grooves. (See id. at 62:8-64:1; Gov.'s Ex. 15 at 17.) For cartridge cases, Mr. Smith examines class characteristics based on the caliber, the firing pin shape, breech face marks configuration from machining, and the relative position of the extractor and ejector marks. (3/26/24 Hr'g Tr. 64:3-65:8; Gov.'s Ex. 15 at 18.) If there is a difference, he eliminates the items as a match. (See 3/26/24 Hr'g Tr. 59:14-15.) If there is a similar class between two items, he moves on to the comparison step. (Id. at 59:7-8.) He compares the individual characteristics using the AFTE theory to decide whether there is sufficient agreement. (See id. at 59:7-61:7, 65:18-67:1.) Individual characteristics include the width of land impressions, congruency between striations in quantity, and congruency in quality as to the peaks and valleys. (See id. at 72:1-14.) The “AFTE theory is the standard that relates to decisions that can be made for identification when comparing two toolmarks.” (Id. at 82:1-3.)

He uses two types of specialized microscopes to examine the individual characteristics: light comparison microscopy (“LCM”) and virtual comparison microscopy (“VCM”), both of which will scan the surface of the two items and display them side-by-side on the computer monitor. (See id. at 65:25-67:16.) VCM is the more advanced technology. (Id. at 67:14-69:22.) A technician in the FBI lab conducts the scans, not the examiner. (Id. at 141:16-142:11.) Cadre, the 3D program used by the FBI, captures aperture shear marks and part of the breech face, but is not validated for the firing pin impressions. (See id. at 142:17-144:3.) In this case, Mr. Smith used both the LCM tool and the VCM tool, and the result was the same with both tools. (Id. at 68:16-24.) He did not conduct the scans himself. (Id. at 142:12-16.)

Under the AFTE theory, for two items to have “sufficient agreement” in individual characteristics, two conditions must be present. (Id. at 70:6-18; Gov.'s Ex. 15 at 25-26.) First, the pattern agreement has to be significant: the agreement in individual characteristics must be better than what the examiner has been trained to understand is the best-known non-match. (See 3/26/24 Hr'g Tr. 70:16-71:1, 82:22-83:4; Gov.'s Ex. 15 at 26.) Second, the agreement must be consistent with the similarity an examiner would expect had the two bullets been fired from the same gun. (3/26/24 Hr'g Tr. 70:16-71:1, 82:22-83:4; Gov.'s Ex. 15 at 26.) The best-known non-match is the worst-case scenario where the similarity is high, but the specimens are from two different sources. (3/26/24 Hr'g Tr. 83:12-17.)

The conclusion, or identification, stage follows comparison. (See Gov.'s Ex. 15 at 20-21.) If there is sufficient agreement between two specimens, Mr. Smith will make an identification. (See 3/26/24 Hr'g Tr. 70:1-10.) If either of the two conditions for the “sufficient agreement” standard is not met, such that he cannot make an identification, but there are no differences in class characteristics to support an elimination, he will make an inconclusive determination. (See id. at 74:10-20.) Inconclusive conclusions occur when the data for analysis is incomplete or missing from a lack of toolmark reproduction, such that any conclusion would at best be a guess. (Gov.'s Ex. 15 at 23.)

The final step is verification in which a second examiner in the lab looks at the evidence. (3/26/24 Hr'g Tr. 86:6-88:1.) For a blind verification, which occurred in this case, Mr. Smith turned over his notes to his unit chief, the unit chief randomly assigned another examiner to look at the evidence, and then the two notes were brought together to see whether they reached the same conclusion. (See id.) The second examiner did not know Mr. Smith's original conclusion. (See id.; Gov.'s Ex. 15 at 28.) In his lab, all identifications undergo verification, and then eliminations and inconclusive results may be randomly selected for verification by the unit chief. (See 3/26/24 Hr'g Tr. 88:17-89:4.)

In this case, Mr. Smith received two submissions at different times. (Id. at 90:4-5.) The first submission had a firearm. (Id. at 90:7.) He examined it for functionality, test-fired it, and then put the test-fire into NIBIN to see if the firearm was linked to any unresolved shootings. (Id. at 90:7-11, 151:20-22.) NIBIN uses an algorithm to look at similarities and patterns. (Id. at 151:23-24.) Mr. Smith learned that NIBIN locked in on Item 5, a cartridge case. (See id. at 151:20-152:14.)

The second submission occurred when the gun came to him with the two test-fires he produced the first time as well as the cartridge case that he identified in the NIBIN system. (See id. at 90:14-16, 151:20-153:15.) After using the E3CV methodology and examining class and individual characteristics, he concluded that the Item 5 cartridge case (the cartridge case from the alleged crimes) was identified as having been fired from the Item 2 pistol. (See id. at 89:10-94:15, 154:16-155:2.) A second examiner, using a blind verification procedure and traditional LCM, came to the same conclusion as Mr. Smith. (Id. at 93:23-94:15.)

II. ANALYSIS

The Court will first consider the Daubert issues before turning to whether the Government's disclosures violated Rule 16.

A. Daubert analysis

Federal Rule of Evidence 702 governs the admissibility of expert testimony. Fed.R.Evid. 702. A witness, qualified by knowledge, skill, experience, training, or education, may offer an opinion “if the proponent demonstrates to the court that it is more likely than not that” the following conditions are met:

(a) the expert's scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;
(b) the testimony is based on sufficient facts or data;
(c) the testimony is the product of reliable principles and methods; and
(d) the expert's opinion reflects a reliable application of the principles and methods to the facts of the case.
Id. Rule 702 incorporates the principles of Daubert, 509 U.S. 579, and Kumho Tire Co., Ltd. v. Carmichael, 526 U.S. 137 (1999), to ensure that proffered expert testimony, even non-scientific and experience-based expert testimony, is both relevant and reliable. Fed.R.Evid. 702, 2000 Amendments. The focus “must be solely on principles and methodology, not on the conclusions that they generate.” Daubert, 509 U.S. at 595.

To determine whether an expert opinion is admissible, the court performs the following two-step analysis: (1) the court must determine whether the expert is qualified by knowledge, skill, experience, training, or education to render an opinion, and (2) if the expert is so qualified, the court must determine whether the expert's opinion is reliable under the principles set forth in Daubert. 103 Investors I, L.P. v. Square D Co., 470 F.3d 985, 990 (10th Cir. 2006). Daubert's general holding setting forth the judge's gate-keeping obligation applies not only to testimony based on scientific knowledge, but also to testimony based on technical or specialized knowledge. Kumho Tire, 526 U.S. at 141. Daubert thus covers expert testimony that is not purely scientific. United States v. Medina-Copete, 757 F.3d 1092, 1101 (10th Cir. 2014). Courts should ensure “that an expert, whether basing testimony upon professional studies or personal experience, employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.” Id. (quoting Kumho Tire, 526 U.S. at 152). The proponent of the expert bears the burden by a preponderance of the evidence to establish that the requirements for admissibility have been met. See United States v. Nacchio, 555 F.3d 1234, 1241, 1251 (10th Cir. 2009).

1. Mr. Smith is qualified as an expert in the field of firearms and toolmark analysis.

Mr. Smith has a bachelor-of-science degree in biology and a master's degree in forensic science. (3/26/24 Hr'g 18:18-21.) He has been a forensic science and toolmark examiner since 2002. (Id. at 8:5-17.) Becoming a firearm and toolmark examiner requires a two-year, hands-on training process to learn how to look at patterns and understand similarities. (See id. at 13:13-14:24.) As part of his training, Mr. Smith had to study a training manual, look at thousands of comparisons in samples, and pass a series of oral board exams and competency tests. (See id. at 13:13-17:23.) His training also included learning how to conduct identification and examination processes, visiting manufacturing facilities, and assisting in real cases as a trainee. (See id. at 15:5-16:18.) Mr. Smith undergoes annual proficiency testing and has never received an unsatisfactory result since becoming a qualified examiner. (Id. at 18:1-15.) He also must complete eight hours of annual training, although he receives well more than that each year. (Id. at 15:7-9.) He has received thousands of hours of training. (Id. at 14:25-15:10.) Since joining the FBI, he has worked on thousands of firearm-and-toolmark cases. (Id. at 11:25-12:4.)

Mr. Smith is a member of the AFTE and the European Network of Forensic Science Institutes; he has participated in working groups and has given numerous presentations in the field. (See 3/26/24 Hr'g 19:3-22:19.) Mr. Smith is the technical leader in his lab, overseeing operations and 13 examiners and reviewing all standard operating procedures (“SOPs”). (Id. at 8:18-9:4.) Additionally, he has served as a training program manager and a quality assurance program manager. (Id. at 9:5-17.) Mr. Smith has ten years of experience as an American Society of Crime Laboratory Directors (“ASCLD”) assessor, with specialized training to assess the quality system for the accreditation of other laboratories. (Id. at 10:11-11:24.) He has published two articles in the field, he helped design the FBI Ames 2021 Study, and he has served as a peer reviewer for articles in the field. (See id. at 23:2-27:2, 106:18-20.) He also has taught courses in the field, including graduate-level university courses, and served as a mentor for other examiners. (See id. at 32:7-33:22.) He has testified as an expert in state and federal court approximately 60 times. (Id. at 34:12-16.)

Based on the evidence, Mr. Smith is qualified as an expert in the field of firearm and toolmark analysis.

2. Mr. Smith's opinions are reliable.

The Supreme Court provided a list of specific factors in Daubert bearing on reliability that trial courts could consider in executing the gatekeeping obligation: (1) whether the theory or technique has been or can be tested; (2) whether the theory or technique has been subjected to peer review and publication; (3) the technique's known or potential rate of error; (4) the existence and maintenance of standards controlling the technique's operation; and (5) whether a particular technique or theory has gained general acceptance in the relevant scientific community. United States v. Hunt, 63 F.4th 1229, 1245-49 (10th Cir. 2023); United States v. Rodriguez-Felix, 450 F.3d 1117, 1123 (10th Cir. 2006) (citing Daubert, 509 U.S. at 593-94). These factors are not a checklist or test, but rather a guide to determine whether the expert testimony is reliable. See Kumho Tire, 526 U.S. at 150-52.

a. Whether the theory can be or has been tested

The recent history of the field of firearms and toolmark examination shows efforts to research and study the accuracy, reliability, and validity of this forensic science discipline. See Hunt, 63 F.4th at 1235-37 (setting forth an overview of the methodology of firearm toolmark examination and reports scrutinizing the field). In 2009, the National Research Council of the National Academies, through a congressionally authorized committee created to study forensic-science practices, called for more studies to be performed in the discipline to understand the reliability and repeatability of the methods. See id. at 1236 (citing Nat'l Rsch. Council, Strengthening Forensic Science in the United States: A Path Forward 1-2, 22, 154 (2009)) (hereinafter “NRC Report”). (See also 3/26/24 Hr'g Tr. 104:8-105:12.) The NRC Report recognized that firearm toolmark evidence had value because class characteristics narrowed the pool of tools that may have left a distinctive mark, and individual characteristics might be distinctive enough to suggest one particular source. Hunt, 63 F.4th at 1236-37 (quoting NRC Report at 154). The NRC Report, however, “criticized the discipline's ‘lack of a precisely defined process,' stating that ‘AFTE has adopted a theory of identification, but it does not provide a specific protocol.'” Id. at 1236 (quoting NRC Report at 155).

Additional studies were conducted following the NRC Report, including one conducted by the Ames Laboratory, referenced here as the Ames I Study. See Hunt, 63 F.4th at 1237 (citing David P. Baldwin et al., A Study of False-Positive and False-Negative Error Rates in Cartridge Case Comparisons, Ames Laboratory, USDOE Technical Report # IS-5207 (2014)). (See also Gov.'s Ex. 15 at 41 (listing selected studies after 2009).) The Ames I Study “tested 218 firearm examiners by sending each of them 15 sets of four spent cartridge cases,” and asking them to determine whether the fourth cartridge case in each set came from the same firearm as the three other cartridge cases in the set. Hunt, 63 F.4th at 1237. (See also Gov.'s Ex. 15 at 41.) Excluding the inconclusive determinations, the false-positive rate in the Ames I Study was 1.52%, which may overestimate the error rate, given that most laboratories conduct peer-review analyses. Hunt, 63 F.4th at 1237.

In 2016, the President's Council of Advisors on Science and Technology issued a report (“PCAST Report”) that criticized the theory of identification used in the field as circular: “The ‘theory' states that an examiner may conclude that two items have a common origin if their marks are in ‘sufficient agreement,' where ‘sufficient agreement' is defined as the examiner being convinced that the items are extremely unlikely to have a different origin.” Id. at 1237-38 (quoting PCAST Report at 104). The PCAST Report also said that studies in the field of firearms and toolmark analysis fell short of using the criteria necessary for foundational validity, and that there was a need for more studies to measure validity and estimate reliability. Id. at 1237. Nevertheless, the PCAST Report did not call for wholesale exclusion of the evidence in courts. Id. at 1238. Instead, it recommended additional appropriately designed studies be conducted, like the black box Ames I Study. Id. at 1237-38 (citing PCAST Report at 112). (See also 3/26/24 Hr'g Tr. 105:20-106:5.) The PCAST Report set forth suggestions for an appropriately designed study to include the following: sufficient sample size; samples representative of the casework in the field; an open set, pairwise study design; and unbiased organizations conducting the studies. (See 8/28/24 Hr'g Tr. 41:6-45:13.)

The PCAST Report recommended one more properly designed, open, black box study to be conducted in the field and called for an error rate of less than 5% in the studies. (See 3/26/24 Hr'g Tr. 105:20-106:14; 8/28/24 Hr'g Tr. 45:25-46:7.) In a black box study (also known as a validation or accuracy study) in this field, an input (bullets or cartridge cases) is given to an examiner and the examiner is asked to give a result using the AFTE theory of identification and their training and experience. (See 3/26/24 Hr'g Tr. 107:19-108:18.) The study tests the accuracy of examiners through false positive and false negative error rates. (Id. at 108:19-22.) A false positive occurs when an examiner wrongly concludes two samples were fired from the same gun when they came from different guns. (Id. at 115:1-4.) A false negative occurs when an examiner says that two samples were fired from different guns when they came from the same gun. (Id. at 115:5-7.) The open-test design preferred by the PCAST Report involves a single pair comparison containing one unknown compared to one known, where the comparison may or may not match; no inferences can be drawn from each comparison. (See id. at 114:7-22; Gov.'s Ex. 15 at 39.)
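The arithmetic behind these error rates can be sketched briefly. The tallies below are hypothetical, invented solely to illustrate the definitions above; they are not drawn from any study cited in this opinion:

```python
# Illustrative sketch of black box study error-rate arithmetic.
# All tallies are hypothetical, for illustration only.

# Examiner decisions on pairs known to come from DIFFERENT firearms:
different_source = {"identification": 2, "elimination": 150, "inconclusive": 48}
# Examiner decisions on pairs known to come from the SAME firearm:
same_source = {"identification": 180, "elimination": 3, "inconclusive": 17}

# A false positive: an "identification" call on a different-source pair.
false_positives = different_source["identification"]
# A false negative: an "elimination" call on a same-source pair.
false_negatives = same_source["elimination"]

fp_rate = false_positives / sum(different_source.values())
fn_rate = false_negatives / sum(same_source.values())

print(f"false positive rate: {fp_rate:.2%}")  # 2 / 200 = 1.00%
print(f"false negative rate: {fn_rate:.2%}")  # 3 / 200 = 1.50%
```

Note that the denominators here include inconclusive answers, which mirrors the scoring design discussed later in this opinion.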

There are at least five properly designed firearm and toolmark validation studies, per the PCAST Report recommendations: (i) the Ames I Study; (ii) the 2018 Keisler, Isolated Pairs Research Study (“Keisler study”); (iii) the 2020 Chapnick, et al., Results of the 3D [VCM] Error Rate (VCMER) Study for firearm forensics (“Chapnick study”); (iv) the 2021 FBI/Ames study; and (v) the 2023 Guyll et al., Validity of forensic cartridge-case comparisons study (“Guyll study”). (See 3/26/24 Hr'g Tr. 106:9-20, 116:18-121:20; 8/28/24 Hr'g Tr. 57:21-60:10, 60:11-19; Gov.'s Ex. 10.) Numerous other studies have been conducted to test the methodology, although they do not all meet the study-design parameters recommended by the PCAST Report. (See Gov.'s Ex. 10; 3/26/24 Hr'g Tr. 106:9-17, 116:18-121:20; 8/28/24 Hr'g Tr. 57:21-60:19.)

While the AFTE theory and E3CV methodology do not use quantitative analysis in the comparison stage to reach a conclusion, they are based on pattern analysis. The publication of numerous black box studies regarding the reliability and validity of pattern analysis using the AFTE theory demonstrates that the AFTE firearms and toolmark identification theory is testable and has been tested. This factor weighs in favor of finding the methodology reliable.

b. Peer review and publication of the AFTE theory

Publication in a peer reviewed journal is “a relevant, though not dispositive, consideration in assessing the scientific validity of a particular technique or methodology on which an opinion is premised.” Daubert, 509 U.S. at 594. “The theory behind peer review is that observation leads to commentary, and commentary exposes flawed methodology.” United States v. Willock, 696 F.Supp.2d 536, 571 (D. Md. Mar. 23, 2010).

AFTE publishes a peer reviewed journal, the AFTE Journal. (See 3/26/24 Hr'g Tr. 24:4-12, 26:10-25; 8/29/24 Hr'g Tr. 234:9-22.) The AFTE Journal is not indexed like typical scholarly journals, limiting the reach and review by the broader academic community of the studies published therein. (See 8/28/24 Hr'g Tr. 138:10-20.) Some courts have discussed concerns with relying on the AFTE Journal's peer review process because the reviewers for the Journal were all AFTE members with a vested interest in validating their own field and methodologies, and the review was not double-blind, meaning that both an author and reviewer knew the other's identity and could contact each other during the review. See, e.g., United States v. Briscoe, 703 F.Supp.3d 1288, 1302-03 (D.N.M. Nov. 21, 2023) (quoting United States v. Shipp, 422 F.Supp.3d 762, 776 (E.D.N.Y. 2019), and United States v. Tibbs, No. 2016-CF1-19431, 2019 WL 4359486, at *9 (D.C. Super. Sept. 5, 2019)).

Despite these concerns, studies testing the AFTE theory are subject to peer review through submission and publication to the AFTE Journal. (See 3/26/24 Hr'g Tr. 24:4-12, 26:10-25.) In addition, the Journal of Forensic Sciences and Forensic Science International publish peer-reviewed articles on toolmarks and firearm identification. (See id. at 132:12-133:5.) As the record in this case shows, the AFTE theory has been the subject of numerous peer-reviewed studies on error rates and other topics in the field of toolmark analysis. Peer review is also evident from the criticisms in the NRC and PCAST reports and from the studies attempting to respond to those criticisms. (See 8/28/24 Hr'g Tr. 39:25-40:12, 59:18-60:19.) The peer review factor thus favors admissibility.

c. Known or potential rate of error

Daubert directs that, “in the case of a particular scientific technique, the court ordinarily should consider the known or potential rate of error.” 509 U.S. at 594. The evidence supporting this factor is the most vigorously debated by the parties, with Defendant arguing that the potentially higher false positive error rate should lead to exclusion of the evidence or, alternatively, to limiting the scope of the conclusions to which Mr. Smith can testify.

The Government submitted numerous studies in support of its position that the false positive error rate in the field is low, often between 0 and 2 percent. (See Gov.'s Ex. 10 (listing studies and error rates); Gov.'s Ex. 4 (Keisler study); Gov.'s Ex. 5 (2020 Smith, Beretta barrel fired bullet validation study); Gov.'s Ex. 6 (2022 Monson, Smith, Peters, Accuracy of comparison decision by forensic firearms examiners); Gov.'s Ex. 7 (the Guyll study); Gov.'s Ex. 11 (2021 Knowles, et al., The validation of 3D [VCM] in the comparison of expended cartridge cases) (“the Knowles study”); Gov.'s Ex. 12 (2015 Weller et al., Introduction and Initial Evaluation of a Novel Three-Dimensional Imaging and Analysis System for Firearm Forensics) (“the Weller study”); Gov.'s Ex. 13 (2018 Duez et al., Development and Validation of a Virtual Examination Tool for Firearm Forensics) (“the Duez study”); and Gov.'s Ex. 14 (the Chapnick study).) Some of the studies relied upon by the Government, however, suffer from weaknesses that undercut their value in determining reliability. For example, the Knowles study (Gov.'s Ex. 11) had only 13 examiners, a small sample size, and its focus was on false negative error rates, not false positive error rates. (See 3/26/24 Hr'g Tr. 205:20-24; 8/28/24 Hr'g Tr. 41:9-20, 135:10-16.) The Weller study (Gov.'s Ex. 12) evaluated instruments and the algorithm, so its focus was on the performance of the database and its interaction with the VCM, rather than the ability of examiners. (See 3/26/24 Hr'g Tr. 204:20-205:8; 8/28/24 Hr'g Tr. 135:18-23.) The Duez study (Gov.'s Ex. 13) had too small a sample size and was too slanted towards detecting true positives to be useful as a properly designed validation study per PCAST recommendations. (See 8/28/24 Hr'g Tr. 135:4-136:4.)

Nevertheless, despite the noted limitations of certain studies, there are at least five properly designed firearm and toolmark validation studies, per the PCAST Report recommendations: the Ames I Study, the Keisler study, the Chapnick study, the 2021 FBI/Ames study, and the 2023 Guyll study. (See 8/28/24 Hr'g Tr. 57:21-60:10, 60:11-19.) These studies produced the following false positive error rates for cartridge cases: the Ames I Study (1.01%), the Keisler study (0%), the Chapnick study (0.43%), the 2021 FBI/Ames study (0.93%), and the 2023 Guyll study (1%). (See Gov.'s Ex. 15 at 41, 47.) The false positive error rates reported in these studies are all below the 5% limit recommended by the PCAST Report. (See 3/26/24 Hr'g Tr. 121:21-122:21, 123:2-125:3.)

Generally, statisticians do not report a zero percent error rate, because it could indicate that the study design did not give participants enough opportunities to make an error, and that a larger sample size would produce an error rate above zero. (See 8/28/24 Hr'g Tr. 60:21-61:16.) To account for this possibility, statisticians generally report confidence intervals. (See id. at 60:21-62:1.) As to the Keisler study, Keisler later wrote a letter to the AFTE Journal giving a bounded confidence interval for the previously reported zero error rate. (See id. at 62:2-8.)
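The statistical point above can be sketched in a few lines of Python. This is an illustrative calculation only, not drawn from the record: with zero observed errors in n comparisons, the exact one-sided 95% upper confidence bound on the true error rate is 1 - 0.05**(1/n) (the Clopper-Pearson bound), which the well-known "rule of three" approximates as 3/n. The sample size of 300 comparisons is hypothetical.

```python
# Illustrative sketch (hypothetical numbers, not from the record):
# why a reported 0% error rate is usually paired with a confidence
# interval rather than taken at face value.

def upper_bound_zero_errors(n_trials: int, alpha: float = 0.05) -> float:
    """Exact one-sided (1 - alpha) upper confidence bound on the true
    error rate when 0 errors are observed in n_trials comparisons
    (Clopper-Pearson)."""
    return 1 - alpha ** (1 / n_trials)

def rule_of_three(n_trials: int) -> float:
    """Quick 'rule of three' approximation of the 95% upper bound
    for zero observed errors."""
    return 3 / n_trials

# A hypothetical study with 0 false positives in 300 comparisons still
# leaves room for a true error rate approaching 1%:
n = 300
print(f"exact 95% upper bound: {upper_bound_zero_errors(n):.4f}")  # ~0.0099
print(f"rule of three:         {rule_of_three(n):.4f}")            # 0.0100
```

The sketch shows why a larger study shrinks the interval: the upper bound falls roughly in proportion to 1/n, so "zero errors" in 100 trials is a much weaker claim than "zero errors" in 1,000 trials.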

Validation studies in the field thus calculate potential error rates of firearms and toolmark analysis. (See Gov.'s Ex. 15 at 41; 8/28/24 Hr'g Tr. 86:15-90:10.) The error rate data reported in these studies, however, have their limitations: the rate in each study is not a known, comprehensive error rate for the discipline, but rather specific to the study from which it came; it is not predictive of what will occur in the field; it is not the same as a false conviction rate; and it does not apply to any particular examiner or laboratory. (See 3/26/24 Hr'g Tr. 111:11-112:3, 130:13-131:20; 8/29/24 Hr'g Tr. 257:22-24.)

An additional criticism of the reported potential error rates surrounds how the studies accounted for inconclusive results. In the above-noted five studies, the false positive error rate was calculated as the number of false positives divided by the total number of all decisions made throughout the test, including inconclusive results. (3/26/24 Hr'g Tr. 209:6-11.) Inconclusive results, however, were not counted as an error, but rather as a correct answer, and they were calculated in the total number of all decisions. (See 3/26/24 Hr'g Tr. 115:12-116:16, 174:1-4; 8/28/24 Hr'g Tr. 74:2-10.) A design in which an inconclusive is automatically considered a right answer creates a benefit, rather than a cost, to answering inconclusive. (8/28/24 Hr'g Tr. 70:5-15.) So, for example, an examiner could give 100 inconclusive answers and still get a perfect score under the validation studies. (See 3/26/24 Hr'g Tr. 177:22-25.) The Ames I Study, for example, showed examiners reporting 30% of known nonmatches as inconclusive, which could affect the confidence interval for the error rate. (See 8/28/24 Hr'g Tr. 85:3-20.) There is some evidence that the use of inconclusive results in the studies is higher than in case work. (See 8/28/24 Hr'g Tr. 68:3-25.)

Given these results, a strong, unresolved debate has emerged among academics in the field about how to account for inconclusive results. (See 8/28/24 Hr'g Tr. 83:1-84:22, 90:23-91:1, 117:5-8.) The false positive error rates reported in the five studies noted above are not universally accepted, as some scholars believe the treatment of inconclusive results as correct is masking higher error rates. (See id. at 94:19-95:1.) Some scholars advocate treating inconclusive results always as an error, which greatly increases the error rate. (See id. at 83:25-87:14.) Others in the field prefer considering an inconclusive result that leaned toward a false positive as an incorrect result, a calculation that also increases the range of the possible error rate. (See id. at 77:14-78:25 (noting that this calculation applied to the Ames II study resulted in the error rate moving from 0.92% to an error rate up to 7.16%).) Some academics prefer a confidence interval to account for the inconclusive results, treating inconclusive results as all identifications or all eliminations to create a range. (See id. at 88:2-24.) Other academics, however, continue to believe that an inconclusive result is not an error and should be considered an appropriate decision. (See 8/29/24 Hr'g Tr. 224:10-227:4.)
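The competing accounting conventions described above can be illustrated with a short calculation. The counts below are invented for illustration and are not the figures from any study in the record:

```python
# Illustrative sketch (hypothetical counts, not from the record): the
# same study results yield very different false positive rates
# depending on how inconclusive calls are scored.

def fp_rates(false_pos: int, correct: int, inconclusive: int) -> dict:
    total = false_pos + correct + inconclusive
    conclusive = false_pos + correct
    return {
        # Convention used in the five studies: inconclusives count in
        # the denominator but are never scored as errors.
        "inconclusive_as_correct": false_pos / total,
        # Exclude inconclusives from the denominator entirely.
        "inconclusive_excluded": false_pos / conclusive,
        # Score every inconclusive as an error (the harshest view).
        "inconclusive_as_error": (false_pos + inconclusive) / total,
    }

# Hypothetical nonmatch comparisons: 2 false positives, 158 correct
# eliminations, and 40 inconclusives.
rates = fp_rates(false_pos=2, correct=158, inconclusive=40)
for name, rate in rates.items():
    print(f"{name}: {rate:.3f}")  # 0.010, 0.013, 0.210
```

With these hypothetical numbers, the reported rate moves from 1% to 21% depending solely on the scoring convention, which is the heart of the academic dispute the Court describes.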

The Court recognized Dr. Salyards as an expert in forensic science standards, research design and analysis, and forensic validation studies. (8/28/24 Hr'g Tr. 34:6-13.) In Dr. Salyards' opinion, inconclusive results should not automatically be counted as wrong, but they also should not be all counted as right, so he prefers using a confidence interval to show the uncertainty of measurement. (See id. at 89:23-90:10.) Dr. Salyards believes the confidence interval for the error rate is larger than what is currently being reported in the five aforementioned studies. (See id. at 94:19-95:1.) Nevertheless, while he finds certain weaknesses in the current firearm and toolmark validation studies, those weaknesses do not make them invalid. (Id. at 155:11-18.)

Dr. Salyards discussed other concerns with the studies. For example, he noted that participants who drop out of a study before its completion result in missing data that affects the uncertainty in the error rate. (See 8/28/24 Hr'g Tr. 117:21-25, 121:10-122:2.) Study designers, however, cannot compel participants to complete the studies, and Dr. Salyards acknowledged that the drop-out rate does not mean the error rate is necessarily higher; instead, the confidence interval is larger. (See id. at 194:4-195:13.) Additionally, the FBI/Ames II study results regarding the repeatability and reproducibility of results concerned Dr. Salyards. (See id. at 96:8-20, 97:15-106:13.) A dispute exists in the field as to whether the results of the FBI/Ames II study showed good agreement for its repeatability and reproducibility results. (Compare 8/28/24 Hr'g Tr. 97:15-106:13, with 8/29/24 Hr'g Tr. 213:13-215:11.) Despite the weaknesses Dr. Salyards believes exist in the studies, he admitted that the five above-identified studies were valid and followed PCAST Report recommendations.
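Dr. Salyards' point about drop-outs, that missing data widens the confidence interval rather than raising the point estimate, can be sketched with hypothetical numbers (all figures below are illustrative, not from the record):

```python
import math

# Illustrative sketch (hypothetical numbers): participant drop-out does
# not change the observed error rate, but the smaller completed sample
# widens the confidence interval around that rate.

def ci_half_width(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a normal-approximation 95% confidence interval
    for a proportion p observed over n comparisons."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.01                 # same observed false positive rate either way
full, completed = 400, 300  # e.g., 100 of 400 hypothetical enrollees drop out
print(f"n={full}: +/- {ci_half_width(p, full):.4f}")
print(f"n={completed}: +/- {ci_half_width(p, completed):.4f}")
```

The point estimate is identical in both rows; only the width of the interval grows as the completed sample shrinks, which matches the testimony that drop-outs enlarge the confidence interval rather than the error rate itself.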

Having considered all the evidence submitted at the hearing, the Court concludes that the potential rate of error factor weighs in favor of admissibility. A review of the black box studies meeting the PCAST Report recommendations indicates that false positive error rates are around 1%, well within the PCAST Report's recommended 5% limit. Moreover, the black box studies did not include a verification step, as is used by accredited labs, so the error rate could be even lower in the field. (See 3/26/24 Hr'g Tr. 108:1-6, 119:12-16.) These tests are also designed to ferret out the upper boundaries for a false positive. (See 3/26/24 Hr'g Tr. 179:20-21.) These error rates indicate that the methodology used in the field is reliable and avoids false positive errors.

The accuracy of the error rate data is subject to valid criticisms, particularly regarding how to account for inconclusive results. Using confidence intervals to account for inconclusive results increases the potential error rate. Nevertheless, the concern at issue here is the error rate for which an examiner makes a false positive identification. See, e.g., United States v. Harris, 502 F.Supp.3d 28, 39 (D.D.C. Nov. 4, 2020). The inconclusive results are not false positives, and thus not inculpatory. (See 3/26/24 Hr'g Tr. 131:22-132:11; Gov.'s Ex. 15 at 49.) As one court persuasively explained, “while an inconclusive result is an error insofar as it means the methodology did not produce an answer, it is not an error in the sense that it falsely attributes a cartridge or casing to the wrong firearm.” United States v. Rhodes, Case No. 3:19-cr-00333-MC, 2023 WL 196174, at *4 (D. Ore. Jan. 17, 2023) (concluding that the error rates for toolmark analysis weighed strongly in favor of admissibility). Moreover, a consensus has yet to emerge that the method used in the five identified studies to report false positive error rates is necessarily wrong or that it invalidates those studies. For all the foregoing reasons, the Court finds that the potential rates of error that are reported in the studies following the PCAST Report recommendations for study design support reliability and admissibility. See United States v. Brown, 973 F.3d 667, 704 (7th Cir. 2020) (“Although the error rate of [AFTE] method varies slightly from study to study, overall it is low in the single digits and as the district court observed, sometimes better than algorithms developed by scientists.”).

d. Existence and maintenance of standards controlling the technique's operation

The AFTE theory is the standard for identification used in the firearm and toolmark analysis field. (See 3/26/24 Hr'g Tr. 133:8-15.) The theory, however, relies on a subjective evaluation in determining whether the toolmark impressions are sufficient to warrant an identification or elimination. The “sufficient agreement” standard is the subject of valid criticism for its circularity. The first condition of the AFTE theory is based on the internal reference a toolmark examiner has of the level of toolmark agreement in the best-known non-matches. (Id. at 166:8-11.) Measurements are analyzed at, and limited to, the class characteristics stage of the examination, not the individual characteristic comparison stage. (See id. at 161:10-162:8 (describing measuring depth of individual characteristics as “extremely novel”).) The AFTE theory does not employ a numerical standard. (Id. at 167:2.)

Lower courts are divided about whether the Government can meet the maintenance-of-standards factor given the subjectivity and circularity of the AFTE theory. Compare Harris, 502 F.Supp.3d at 41-42 (concluding lack of objective standards means existence-and-maintenance-of-standards factor could not be met, but noting that subjective methodology is not necessarily a dispositive factor), with United States v. McCluskey, CR. No. 10-2734 JCH, 2013 WL 12335325 (D.N.M. Feb. 7, 2013) (“[T]he AFTE training courses and CTS proficiency testing (with all of its limitations) demonstrate the existence of standards governing the methodology of firearms-related toolmark examination to enable a properly trained examiner to provide in-court technical testimony that will be sufficiently reliable and helpful to a lay jury to assist the jurors in determining whether bullets or cartridges have been fired from a particular firearm.”). Courts finding that the factor weighs in favor of admissibility rely on the standards that control the quality of the training of examiners, ensure the proficiency of the examiners, and govern laboratory accreditation. See, e.g., Rhodes, 2023 WL 196174, at *5-6 (finding that identifiable-standards factor weighed in favor of admissibility, despite subjectivity, based on standards that included “a specific laboratory's standard operating procedures and guidelines; International Organization for Standardization (“ISO”)/International Electrotechnical Commission (“IEC”) Standard 17025; training, monitoring, validation of procedures, and regular proficiency testing to ISO/IEC Standard 17034” as well as verification processes that served as quality control). Some courts finding that this factor weighs against admissibility have nonetheless used the same evidence of standards for training, proficiency, and laboratory accreditation to support their conclusion that the expert testimony was admissible.
See, e.g., Harris, 502 F.Supp.3d at 41 (finding that lack of objective standards in AFTE's “sufficient agreement” analysis means standards factor could not be met, but that balance of other factors and fact that subjective methodology is not per se unreliable weighed in favor of admission of expert testimony).

While the agreement in individual characteristics may be a subjective call relying on the examiner's judgment, it is one based on extensive training and experience of the examiner. Richardson, 2024 WL 961228, at *7. Examiners review thousands of known non-matches in their training program in pattern analysis, and based on that knowledge and experience, the examiner subjectively evaluates whether he thinks the similarities in the toolmark impressions are sufficient to warrant an identification. (See 3/26/24 Hr'g Tr. 13:15-18:12, 166:8-25.) Examiners undergo regular proficiency testing to demonstrate that they can reliably apply the subjective methodology to make correct identifications in the field. Competency standards help ensure examiners are qualified. Verification processes help to guard against errors. Laboratories are also subject to numerous standards in the accreditation process, and they follow a rigorous quality assurance process and a detailed set of protocols, including proficiency testing and audits. (See 3/26/24 Hr'g Tr. 134:1-137:11.) Accordingly, this Court agrees with the reasoning of the courts that have concluded that the considerable training and proficiency standards in the field mitigate the lack of objective standards in the AFTE theory by ensuring that the examiners using their subjective judgment are basing their decisions on considerable training and have passed, and continue to pass, competency tests. While the standards control the proficiency of the examiner, and are a step removed from an objective standard for the technique itself, the proficiency standards nonetheless help control the technique's operation. Consequently, the Government has met this fourth factor.

e. General acceptance of the theory

Numerous professional organizations, including AFTE, the American Academy of Forensic Sciences, and the International Association of Identification, as well as other international organizations, recognize firearm and toolmark identification as a forensic science discipline. (See 3/26/24 Hr'g Tr. 97:2-99:7; Gov.'s Ex. 15 at 30.) Firearms analysis has been represented in the forensic field as early as 1915. (3/26/24 Hr'g Tr. 98:25-99:7.) Many research organizations provide funding for studies of firearm and toolmark identification. (See id. at 99:13-100:16; Gov.'s Ex. 15 at 31.) More than 50 higher education institutions offer degrees in forensic science with programs and classes in firearm and toolmark identification. (See 3/26/24 Hr'g Tr. 103:1-104:7; Gov.'s Ex. 15 at 33.)

Courts have observed that the AFTE theory of firearms and toolmark identification is widely accepted in the forensic community and, specifically, in the community of firearm and toolmark examiners. See, e.g., Hunt, 63 F.4th at 1249 (noting with approval that district court found AFTE method is widely used by firearms examiners); Brown, 973 F.3d at 704 (same). Defendant argues that examiners have a vested, career-based interest in the theory's acceptance, while outside scientists have sharply criticized the field. The AFTE theory, however, continues to undergo testing and research, and despite the criticisms, it retains general acceptance among both professional examiners and federal courts as a reliable method of firearms and toolmark identification. This fifth factor weighs in favor of admission.

f. Weighing Daubert factors

The current subjectivity in determining whether there is “sufficient agreement” between individual characteristics in toolmark patterns and the limitations in determining the known error rates in the validation studies warrant caution in admitting expert testimony on firearm and toolmark analysis. See Hunt, 63 F.4th at 1244. The lack of objective standards, however, is not dispositive in a Daubert analysis. See, e.g., United States v. Baines, 573 F.3d 979, 991 (10th Cir. 2009) (explaining that fact that process depends on subjective judgment of fingerprint analyst “does not, in itself, preclude a finding of reliability”). Kumho Tire instructs that flexibility is needed for certain expert testimony, and that is particularly true for expert testimony based largely on experience and training.

Standards helping to ensure the competency of examiners in the field mitigate the lack of objective standards in the AFTE theory. Moreover, there is a degree of objectivity in the analysis regarding the alignment of markings or striae in the cartridges under comparison. Richardson, 2024 WL 961228, at *7. See also United States v. Alvin, Case No. 22-20244-CR-GAYLES/TORRES, 2024 WL 149288, at *6 (S.D. Fla. Jan. 5, 2024) (“[W]hile in part subjective, a ballistics expert's conclusions rest upon articulate observations and are grounded in tangible physical evidence, which can be subject to challenge through cross-examination in court.”). Validation studies in the field confirm, despite the debate on confidence intervals, that properly trained experts following standards in the field have low false positive error rates. The method is testable, peer-reviewed, and is accepted in the relevant scientific community. The overall balance of factors weighs in favor of finding the E3CV methodology used by Mr. Smith based on the AFTE theory reliable. Cf. Baines, 573 F.3d at 990 (concluding that evidence of error rate in field of fingerprint examination supported decision to admit expert testimony, in part, because very few mistakes were reported in testing that trainees must complete before progressing to actual casework and where expert testified he always attained perfect mark on proficiency tests); United States v. Pete, Case No. 3:22cr48-TKW, 2023 WL 4928523, at *6 (N.D. Fla. July 21, 2023) (denying motion to exclude firearm identification evidence, despite that AFTE “sufficient agreement” standard is largely dependent on examiner's subjective judgment, because examiner is drawing conclusions from observable, verifiable marking on evidence, and other Daubert factors favored admission), aff'd by No. 23-14112, 2024 WL 4040388 (11th Cir. Sept. 4, 2024) (unpublished).

3. Mr. Smith's testimony is relevant and helpful to the jury.

Mr. Smith's testimony is relevant to whether the firearm the police recovered is the same firearm that the offender fired at the scene of the carjacking. His testimony and specialized knowledge will help the jury understand the evidence and determine a fact in issue.

4. The Court will permit Mr. Smith to offer expert testimony, subject to the DOJ standards for uniform language for testimony.

Based on the record, the Government has met its burden to demonstrate that Mr. Smith has extensive qualifications in the field, his testimony is relevant to a key issue in the case, sufficient facts and data support the reliability of the methodology he used, his testimony is the product of reliable principles and methods, and he reliably applied the principles and methods to the facts of this case. The Government's proposed testimony of Firearms and Toolmark Examiner Erich Smith is thus admissible at trial under Rule 702 and Daubert, subject to the limitation discussed below. Cf. Brown, 973 F.3d at 702-04 (affirming district court's admission of expert testimony of firearm and toolmark examiners who testified, among other things, that cartridge casings found at different scenes were fired by same firearm because AFTE methodology had been almost uniformly accepted by federal courts, has been tested and subjected to peer review, was widely accepted beyond judicial system, and error rates of the method in studies were low (in single digits)).

Defendant nevertheless requests that, should the Court not exclude Mr. Smith's testimony under Daubert, it impose limits on the testimony in line with United States v. Briscoe. Having reviewed the Briscoe decision, the Court disagrees that the limitations imposed in that decision are warranted here. However, the Court finds that one limitation and clarification is appropriate based on the record in this case. The Department of Justice and the FBI have developed standards for uniform language for testimony regarding firearms and toolmark identification. (See Gov.'s Ex. 9; 3/26/24 Hr'g Tr. 94:16-96:2, 133:16-23.) Those standards preclude Mr. Smith, among other things, from testifying to any percentage of certainty, from comparing the exam with past exams, and from asserting that two toolmarks originated from the same source to any absolute or 100% certainty using expressions like “reasonable degree of scientific certainty” or similar assertions of reasonable certainty. (See id.) Mr. Smith testified that he must follow DOJ protocols. (Id.) The Court finds that the DOJ strictures are reasonable and a sufficient limitation governing Mr. Smith's testimony. See Richardson, 2024 WL 961228, at *11; Harris, 502 F.Supp.3d at 45.

Mr. Smith, however, indicated at the Daubert hearing that he could testify that there is a negligible possibility that there could be another gun out there that produces the same patterns. (See 8/29/24 Hr'g Tr. 258:1-7; id. at 261:6-12 (“Q. If Ms. Jacobs asked you if there was any possibility that the cartridge was fired from some other gun, what would your answer be? A. The possibility is negligible. Based on all the testing and all the research, I still have to admit there is a possibility out there, but it's considered to be negligible.”).) An expression of negligible possibility, however, could easily be construed by the jury as an expression similar to absolute certainty or a reasonable degree of scientific certainty that two toolmarks originated from the same source. The Court will thus preclude Mr. Smith from testifying to expressions of negligible uncertainty or negligible possibility, as that expression appears to fall within the contours of the DOJ recommended prohibitions. The Court otherwise imposes no additional restrictions on Mr. Smith's testimony.

Cf. United States v. Blackman, Case No. 18-CR-00728, 2023 WL 3440384, at *9 (N.D. Ill. May 12, 2023) (“[T]he Government has already mitigated concerns about the degree of certainty with which their experts will testify in this case, agreeing in advance that its experts will not testify to 100% certainty in their identifications but rather that based upon their training and experience, they would not expect any other firearm to produce the markings observed. In the same vein, this Court holds that the experts shall not use language that implies the methods are an exact science or reflect any specific statistical degree of certainty (100% or otherwise).”).

B. Rule 16 analysis

1. Legal standard

According to Federal Rule of Criminal Procedure 16(a)(1)(G)(ii), the court must set a time for the government to make its expert disclosures “sufficiently before trial to provide a fair opportunity for the defendant to meet the government's evidence.” Fed. R. Crim. P. 16(a)(1)(G)(ii). The expert disclosures must contain: “a complete statement of all opinions that the government will elicit from the witness in its case-in-chief,” the bases and reasons for those opinions, “the witness's qualifications, including a list of all publications authored in the previous 10 years,” and a list of all other cases in which the witness has testified as an expert at trial or by deposition in the previous four years. Fed. R. Crim. P. 16(a)(1)(G)(iii). Parties have a continuing duty to disclose newly discovered evidence promptly. Fed. R. Crim. P. 16(c).

If a party fails to comply with Rule 16, a court may prohibit the party from using the undisclosed evidence at trial. Fed. R. Crim. P. 16(d)(2)(C). In determining the appropriate sanction for a Rule 16 violation, the court should consider the reasons the government delayed producing the requested materials, including whether the government acted in bad faith; the extent of prejudice to the defendant resulting from the government's delay; and the feasibility of curing the prejudice with a continuance. United States v. Martinez, 455 F.3d 1127, 1130 (10th Cir. 2006). The district court should impose the least severe sanction that will ensure compliance with its discovery orders. Id.

2. Analysis

a. APD reports

Turning first to Defendant's request to exclude APD reports not already disclosed, the United States asserts that all reports in its possession have been previously disclosed. (Gov.'s Resp. 17, ECF No. 80.) It objects to the exclusion of any report that may later come into its possession, given its ongoing duty to disclose. Defendant's request is not yet ripe, as it targets reports that hypothetically might be disclosed belatedly. Should such a report be disclosed subsequently, Defendant may raise the issue again, and the Court will have better context for ruling on whether the Government violated Rule 16 and on the appropriate sanction. This request will therefore be denied without prejudice.

b. Mr. Smith's opinions

Defendant next moves to exclude Erich Smith's opinions, arguing that the belated and incomplete filings violated Rule 16. The Government did not seek to test the firearm until November 2023, but the Notice was timely filed according to the Court's order that set the expert disclosure deadline on February 9, 2024. While most of the underlying documents were not disclosed by February 9, 2024, they were made available before the March 25, 2024, trial setting, and that trial setting was continued for well more than seven months. Moreover, Defendant had an opportunity to cross-examine Mr. Smith at the Daubert hearing; to retain his own expert witness, Dr. Salyards, who testified at the continuation of the Daubert hearing; and to cross-examine Mr. Smith again during the Government's rebuttal argument. Accordingly, Defendant had sufficient time before trial to meet the Government's evidence regarding Mr. Smith's opinion testimony, and the Government did not violate the timing provisions of Rule 16.

Additionally, Defendant argues that the Government's expert disclosures did not satisfy Rule 16's requirements for an expert summary. More specifically, Defendant points out the following alleged deficiencies: (1) there is no summary of individual characteristics/toolmarks that may be in agreement and no notes on the five images to help determine what was important to the examiner in reaching his opinion; (2) there is no statement on what toolmark methodology was used; (3) there is no description of the technology program used to capture the images; (4) the report does not explain where the second cartridge case visible in the images came from; and (5) the report does not describe what the named preparer or other examiner did for the analyses. (Def.'s Mot. 11, ECF No. 61.) Defendant asserts that Mr. Smith's report violated SOP 7.1.2.1(C) for Toolmark and Fracture Examinations, which requires all observations of a questioned toolmark to be recorded on the Firearms and Toolmark Discipline (“FTD”) Worksheet, including physical, class, subclass, and individual characteristics. (See Def.'s Ex. J, ECF No. 61-10 at 3 of 6.)

Defendant received Mr. Smith's 59-page report that explained generally the methodology Mr. Smith used, the FTD Worksheet, the VCM images supporting his comparison, and his conclusion. (See 3/26/24 Hr'g Tr. 89:10-92:21; Gov.'s Ex. 16.) The FTD Worksheet listed the class characteristics of the cartridge, including the caliber, the brand of ammunition, what it was made of, the breech face marks, the firing pin shape, and the position of the extractor and ejector. (See Gov.'s Ex. 16 at 51 of 59; 3/26/24 Hr'g Tr. 90:17-91:21.) Mr. Smith's report included the results page in which he concluded that the Item 5 cartridge case was identified as having been fired from the Item 2 pistol. (See Gov.'s Ex. 16 at 52 of 59; 3/26/24 Hr'g Tr. 91:22-92:11.) Page 52 of the report was a snapshot of an area on the breech face that showed defects; images on page 53 highlighted the defects on the breech face, shear, and aperture. (See Gov.'s Ex. 16 at 52-54 of 59; 3/26/24 Hr'g Tr. 91:22-92:21.) These images focused on the different toolmarks showing what Mr. Smith believed to be the areas of significant agreement. (See 3/26/24 Hr'g Tr. 158:24-159:22.) Mr. Smith, however, did not provide any annotations to describe what he observed in the images. (See 3/26/24 Hr'g Tr. 157:6-158:24.) He noted, however, in the narrative that there was a shear, an individual characteristic. (See id. at 163:1-12.) Although Mr. Smith did not list the AFTE theory in his report, he subsequently testified at the Daubert hearing that he used the E3CV method based on the AFTE theory for his analysis. (See id. at 58:5-23, 164:17-24.)

The Government provided to the defense extensive discovery, including Mr. Smith's report, case notes, worksheets, side-by-side comparison photographs taken with the microscope, and the conclusions reached. There is no evidence that the timing of those disclosures was in bad faith. To the extent that Mr. Smith failed to fully describe what he observed in the photographs he provided or to detail each individual characteristic that led to his conclusions, the defense was able to gain that level of detail at the Daubert hearing. That hearing occurred well before trial, so there is no prejudice to Defendant resulting from the lack of detail contained in the summary. As for the delay in trial, much of the delay was caused by the lack of availability of Dr. Salyards, Defendant's expert, and the numerous pending pretrial motions, some of which required evidentiary hearings before trial. The Court finds that Rule 16 does not compel exclusion of the reports or Mr. Smith's testimony based on this record. Cf. United States v. Brown, 592 F.3d 1088, 1089 n.2, 1091 (10th Cir. 2009) (concluding that fingerprint identification expert's summary substantially complied with Rule 16 where it said expert compared defendant's known fingerprints found on cards with latent fingerprint found on document and that she would testify that latent fingerprint was the defendant's fingerprint; summary was not deficient for failing to mention fourteen identical points of comparison or to describe specifically expert's methodology).

IT IS THEREFORE ORDERED that Defendant's Motion to Exclude Trial Witnesses for Federal Rule of Criminal Procedure 16 Violations (ECF No. 61) is GRANTED IN PART AND DENIED IN PART as follows:

1. The Court GRANTS IN PART Defendant's request to limit the scope of Mr. Smith's testimony. Mr. Smith must adhere to the standards set forth in the United States Department of Justice Uniform Language for Testimony and Reports for the Forensic Firearms/Toolmarks Discipline Pattern Examination, which the Court finds excludes testimony that there is a “negligible” possibility that there could be another gun out there that produces the same patterns.

2. In all other respects, the Court DENIES the motion.

