Opinion
Indictment No.: 817/2020
09-08-2023
For the People: ADA Rachel Singer
For Defendant: Clinton Hughes and Richard Torres from Brooklyn Defender Services
Danny K. Chun, J. The defendant moves to preclude the DNA evidence in this case arguing that the High Sensitivity DNA typing, also known as Low Copy Number ("LCN") testing, and the Forensic Statistical Tool ("FST") used here are not methods generally accepted in the relevant scientific community as reliable. The People oppose the motion.
The defendant is charged under Indictment No. 817/2020 with one count of Murder in the Second Degree (PL § 125.25[1]). The People allege that on or about and between September 25, 1980, and September 26, 1980, the defendant caused the death of Lorraine Snell. This cold case was reopened, and the defendant was indicted, based in part on testing of the DNA recovered from under the victim's fingernail, which was collected at the time of autopsy. The Office of the Chief Medical Examiner ("OCME") report indicates that High Sensitivity PCR DNA typing was performed on the samples from the victim's fingernails. OCME determined that a mixture of DNA from at least two people was detected on one of the fingernails from the victim's right hand ("PMR2") and stated that the mixture was suitable for direct comparison. Snell was included as a contributor to the mixture. OCME then compared the defendant's DNA to the mixture found on fingernail PMR2 and determined that the defendant was a possible contributor to the mixture. OCME used a probabilistic genotyping software called FST to produce a likelihood ratio statistic to give weight to that conclusion. The OCME report states that the DNA mixture found on fingernail PMR2 is approximately 476 million times more probable if the sample originated from the defendant and Snell than if it originated from Snell and one unknown, unrelated person. Therefore, OCME concluded that there is very strong support that the defendant and Snell contributed to the mixture, rather than Snell and one unknown, unrelated person.
The defendant filed a motion to preclude the People from calling an expert witness to testify on their direct case regarding any conclusion reached using either LCN DNA testing method or FST. The defendant argued that the LCN testing and FST were not methods generally accepted in the relevant scientific community as reliable. In addition, the defendant contended that the sample amplified in this case, which was only 19 picograms, went below the limit of the validation for both LCN testing and FST, further rendering the results unreliable. In the alternative, the defendant moved for a hearing to determine the issue. The People consented to a hearing.
This court ordered a Frye hearing. See Frye v. United States, 293 F. 1013 (D.C. Cir. 1923). The Frye hearing commenced on February 18, 2022, and concluded on November 18, 2022. The People called four witnesses: (1) Dr. Craig O'Connor, (2) Natasha Harvin-Locklear, Esq., (3) Dr. John Buckleton and (4) Dr. James Curran. The defendant also called four witnesses: (1) Dr. Dan Krane, (2) Dr. Jeanna Matthews, (3) Dr. Angela van Daal and (4) Nathaniel Adams. Following the hearing, both sides submitted briefs.
The following constitutes this court's findings of fact and conclusions of law.
Findings of Fact (Frye Hearing Testimony)
The People's Witnesses
(1) Dr. Craig O'Connor
Dr. Craig O'Connor testified that he is the Assistant Director at the Department of Forensic Biology at OCME, a position he has held since 2017 (February 18, 2022 tr at 8-9). He is also the Technical Leader for nuclear DNA testing and serological testing at OCME (February 18, 2022 tr at 9). Dr. O'Connor has worked at OCME since 2008, and prior to becoming the Assistant Director, worked as a Criminalist II, Criminalist III and Criminalist IV (February 18, 2022 tr at 10, 13). In his current position, Dr. O'Connor oversees the technical operations of the laboratory, has the ability to put techniques online or take them offline and reviews validation studies and signs off on them (February 18, 2022 tr at 9). Dr. O'Connor is also responsible for making sure that the work being done at the laboratory meets the relevant accreditation and quality assurance requirements (February 18, 2022 tr at 10). Dr. O'Connor has a Bachelor of Science in physiology and neurobiology, a Master of Science in genetics and genomics and a PhD in genetics and genomics, all from the University of Connecticut. Id. Dr. O'Connor has analyzed and reviewed DNA evidence using LCN testing hundreds of times (February 18, 2022 tr at 16).
Dr. O'Connor took statistics courses both at the undergraduate level and at the graduate level, covering basic statistical analysis as well as a more advanced course in population genetics, which was focused on statistics seen from a genetics level and populations of individuals (June 2, 2022 tr at 996). In addition, at OCME, Dr. O'Connor received forensic statistics training from internal and external trainers, and since 2015, Dr. O'Connor has been training the analysts (June 2, 2022 tr at 997). This lecture involves background knowledge on the fundamentals of population genetics and statistics, such as Mendelian genetics, as well as the application of those statistics to DNA casework for autosomal, Y-STR and mitochondrial STRs. Id. The training also covers how probabilistic genotyping is done from the binary, semi-continuous and fully continuous standpoints (June 2, 2022 tr at 998).
Dr. O'Connor has been trained in the use of FST and has used FST hundreds of times on criminal casework. Id. In addition, Dr. O'Connor has reviewed other analysts’ casework hundreds of times, which included the use of FST. Id. Dr. O'Connor has given lectures on the use of FST and how OCME had validated it (June 2, 2022 tr at 998-999). Dr. O'Connor has testified at evidentiary hearings involving FST three times and at trials several times (June 2, 2022 tr at 999). Dr. O'Connor was not involved in the design or the development of FST, and he has never been a developer of any other probabilistic genotyping tool (June 2, 2022 tr at 1001).
Dr. O'Connor was qualified as an expert in the field of forensic biology, including DNA analysis (February 18, 2022 tr at 35). In addition, while not offered as an expert in statistics, he was permitted to give his opinion on certain areas of statistical analysis tools. Id.
LCN testing is performed when there is a low amount of DNA (February 18, 2022 tr at 39). For LCN testing, the testing procedures are modified in order to increase the sensitivity to get at that low amount of DNA, and also the interpretation protocols are modified in order to account for that increase in sensitivity (February 18, 2022 tr at 39-40). OCME created the Low Copy Number Section at the laboratory around 2002 and started using LCN testing in 2006 after it was validated (February 18, 2022 tr at 38-39). Due to the highly sensitive nature of the testing, OCME had a separate laboratory within the building that was dedicated to LCN testing in order to avoid cross contamination (February 18, 2022 tr at 40, February 28, 2022 tr at 112). Dr. O'Connor testified that when LCN was brought online at OCME, LCN was a modification of an existing technique (February 28, 2022 tr at 109). LCN procedure has been done in different countries, and OCME modeled some of its procedures from the United Kingdom's Forensic Science Service's ("FSS") LCN procedure. Id.
The basic steps of DNA testing are: (1) DNA extraction, (2) quantitation, (3) PCR amplification and (4) analysis by running the DNA through capillary electrophoresis in order to produce a DNA profile (February 28, 2022 tr at 68-71, 80). The amplification cycle for standard DNA typing or high copy number testing is 28 cycles, but for LCN, it was 31 cycles (February 28, 2022 tr at 71). In addition to the three extra cycles, OCME did the amplification three times ("triplicate amps") in order to account for the increase in sensitivity, as opposed to once or twice (February 28, 2022 tr at 113, 116). The 31 cycles were done at OCME by a typing kit called Identifiler (February 28, 2022 tr at 71).
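To illustrate the arithmetic behind the additional amplification cycles described above, the following sketch computes the theoretical yield of an idealized PCR reaction that doubles the template at every cycle; the starting amount and the assumption of perfect doubling are hypothetical simplifications, not OCME's figures.

```python
# Illustrative only: theoretical yield of an idealized PCR reaction that
# perfectly doubles the DNA template at every cycle (real reactions are
# less efficient, so these numbers are upper bounds).
def ideal_pcr_yield_pg(template_pg: float, cycles: int) -> float:
    """Return the amount of DNA, in picograms, after the given number of cycles."""
    return template_pg * (2 ** cycles)

template = 20.0  # hypothetical starting template in picograms
for cycles in (28, 31):
    yield_pg = ideal_pcr_yield_pg(template, cycles)
    print(f"{cycles} cycles: {yield_pg:.3e} pg ({yield_pg / 1e3:.3e} ng)")
# The three extra LCN cycles multiply the ideal yield by 2**3 = 8.
```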
OCME's LCN validation took upwards of four years from the beginning until when it went online (February 28, 2022 tr at 117). OCME had a group of five or six scientists assigned to that validation full-time, and the validation consisted of over 800 samples with sensitivity studies from 100 picograms to 6.25 picograms (February 28, 2022 tr at 117-118). A sensitivity study looks at the different DNA sample amounts that can be used in the testing, and then extrapolates from that information to determine the interpretation procedure (February 28, 2022 tr at 120). During the validation study, the scientists were able to get results from all of the sample sizes from 150 picograms to 6.25 picograms (February 28, 2022 tr at 120-121).
During the LCN validation, OCME also completed all the validation studies recommended by SWGDAM, although at the time, SWGDAM did not have guidelines specific to LCN (February 28, 2022 tr at 121, 197). In 2014, SWGDAM published guidelines for STR enhanced detection methods (March 11, 2022 tr at 196-197). The guidelines were not an endorsement of that methodology, but they stated the best practices for labs that were performing enhanced detection methods for STR typing (March 11, 2022 tr at 197). It was Dr. O'Connor's opinion that the majority, if not all, of the guidelines recommended by SWGDAM were in line with OCME's LCN protocols that were already in effect (March 11, 2022 tr at 198). OCME did not have to modify its LCN protocols in order to be in compliance with the SWGDAM guidelines. Id. In Dr. O'Connor's opinion, the fact that SWGDAM came out with these guidelines showed that SWGDAM was acknowledging that this methodology is being used within the field (March 11, 2022 tr at 204).
SWGDAM is the Scientific Working Group on DNA Analysis Methods, a group within the FBI made up of scientists and practitioners who look at DNA analysis methods and come up with guidelines and recommendations for laboratories that perform DNA testing (February 18, 2022 tr at 43). SWGDAM's standards and guidelines are used by most accredited laboratories in the United States (February 18, 2022 tr at 44).
One definition of extrapolation is looking beyond one's observations, that is, taking the data and making inferences outside the range of values that were actually tested, because it would be unfathomable to try to test every data point from zero to infinity (April 22, 2022 tr at 703). One of the ways OCME extrapolated was by referring to the lower bounds for LCN mixtures. Id. It was pointed out that the SWGDAM guidelines do not mention the word "extrapolation" in connection with the lower limits (April 22, 2022 tr at 704-706). In addition, the FBI Quality Assurance Standards do not mention extrapolating the lower limits (April 22, 2022 tr at 707). The published LCN and FST validation papers do not mention extrapolating their lower limits, but Dr. O'Connor testified that extrapolating is something that is typically done in science (April 22, 2022 tr at 707-708).

Reproducibility, in terms of DNA testing, is showing that a process can give similar or the same results each time the testing is performed (February 28, 2022 tr at 122). With LCN procedures, due to stochastic effects or random sampling effects, you would not expect to get the exact same result each time it is run. Id. However, Dr. O'Connor testified that you would expect to get similar or the same conclusions as to whether it is a mixture of two or three people or whether a person is included or excluded. Id. That conclusion would be reproducible. Id.
Stochastic effects are sampling errors that take place during the amplification step, which can occur with high template samples too but are more common with low template samples (February 28, 2022 tr at 124). Stochastic effects are stutter products, allelic drop-in, allelic drop-out and peak imbalance (February 28, 2022 tr at 125). At each PCR cycle during a DNA analysis, the amount of DNA is being doubled in a sense (March 11, 2022 tr at 130). Dr. O'Connor explained that if we think of DNA as a fragment of substance, if there is enough fragment in there when the copying process begins, it grabs a decent amount of DNA that can be copied (March 11, 2022 tr at 131). However, if the sample is a low amount of DNA and a fragment is not grabbed until the tenth cycle or the 20th cycle, by the end, there is going to be much less of that fragment or that allele than the other alleles, which is how you get peak imbalance. Id. If it happens to only grab one or two fragments or does not grab any fragments in the first couple of cycles, then there is nothing to copy and at the end of the PCR process that fragment is not represented, which is called drop-out (March 11, 2022 tr at 131-132). Drop-in is when you have a small fragment of DNA, whether it is from contamination or from a very, very minor contributor that gets into the process somewhere during the PCR process (March 11, 2022 tr at 132). If during the amplification procedure, DNA polymerase, which is an enzyme that facilitates the amplification, ends up creating small byproduct alleles at a much lower percentage, this is called stutter (March 21, 2022 tr at 344).
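The stochastic effects described in the preceding paragraph can be illustrated with a toy simulation, sketched below under assumed values (a fixed sampling fraction and per-cycle amplification efficiency) that are not drawn from OCME's validation; it simply shows that drop-out and peak imbalance become more common as the number of starting copies falls.

```python
# Toy simulation of stochastic PCR effects at a heterozygous (two-allele)
# locus.  Only a fraction of the copies in the extract makes it into the
# reaction, and each copy is then duplicated with some probability at every
# cycle.  With very few starting copies, an allele can be missed entirely
# (drop-out) or lag in the early cycles and end up under-represented (peak
# imbalance).  All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
SAMPLING = 0.5      # assumed chance that each extract copy enters the reaction
EFFICIENCY = 0.85   # assumed per-cycle amplification efficiency
CYCLES = 31

def amplify(extract_copies: int) -> int:
    copies = rng.binomial(extract_copies, SAMPLING)   # pipetting/sampling step
    for _ in range(CYCLES):
        copies += rng.binomial(copies, EFFICIENCY)    # stochastic doubling
    return int(copies)

def simulate_locus(copies_per_allele: int, runs: int = 500) -> None:
    dropouts = imbalanced = 0
    for _ in range(runs):
        a, b = amplify(copies_per_allele), amplify(copies_per_allele)
        if a == 0 or b == 0:
            dropouts += 1
        elif min(a, b) / max(a, b) < 0.5:             # crude imbalance criterion
            imbalanced += 1
    print(f"{copies_per_allele:>3} copies per allele: "
          f"{dropouts}/{runs} drop-outs, {imbalanced}/{runs} imbalanced")

for copies in (1, 3, 30):   # roughly: very low, low, and moderate template
    simulate_locus(copies)
```

In this toy model the drop-outs and imbalances concentrate in the one- and three-copy settings, which is the qualitative pattern the replicate amplifications and consensus interpretation discussed in the next paragraph are designed to absorb.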
Dr. O'Connor testified that OCME knew that stochastic effects are bound to happen and made modifications to the LCN testing method to take into account these effects (February 28, 2022 tr at 124, March 11, 2022 tr at 133). One of the modifications made by OCME to LCN testing was conducting the PCR process three times ("triplicate amps") and taking the consensus of the three processes to better represent the sample (March 11, 2022 tr at 132-134). By combining the results of the three amplifications, one can adjust for the fact that a peak height imbalance took place or drop-out or drop-in occurred because of the stochastic effects (March 11, 2022 tr at 135-136). Stutter is a naturally occurring phenomenon that DNA analysts cannot eliminate, but can anticipate and account for (March 21, 2022 tr at 345).
Prior to OCME beginning the LCN validation process, the stochastic effects of LCN testing were well documented in scientific literature and discussed at various meetings, and it was well known that stochastic effects increase with LCN testing (March 11, 2022 tr at 136). Dr. Bruce Budowle, who was the head of the laboratory division at the FBI, wrote an article in 2001 discussing the considerations and cautions that a laboratory should use when performing LCN typing, which was one of the articles OCME used as guidance (March 11, 2022 tr at 137-139). In addition, Dr. Peter Gill, who was the head of the UK's Forensic Science Service and a proponent of LCN, published many articles on forensic DNA and LCN testing, including consensus profiles with replicate amplifications, which OCME relied on for the validation and interpretation of LCN procedures (March 11, 2022 tr at 139, 156). Countries that had used LCN testing included Australia, the UK, Germany, the Netherlands, New Zealand and Switzerland (March 11, 2022 tr at 139). New Zealand, in particular, used upwards of 34 cycles for their LCN testing, whereas OCME used 31 (March 11, 2022 tr at 139-140). However, Dr. O'Connor was not aware of what type of admissibility standards existed in these countries (March 24, 2022 tr at 404). Two peer-reviewed journal articles authored by the Netherlands Forensic Institute in 2010 and 2012 concluded that for LCN and low template DNA samples, consensus profiles are the preferred method (March 11, 2022 tr at 141-149).
The New York State Commission of Forensic Science ("Commission") accredits OCME to conduct forensic DNA testing on criminal cases in New York State (February 18, 2022 tr at 17). The Commission is also responsible for approving new methodologies, which will be reviewed and validated by the DNA Subcommittee ("Subcommittee") (February 18, 2022 tr at 17-18). The DNA Subcommittee is made up of scientists, professionals, professors and lab directors within the field of DNA analysis and practices (February 18, 2022 tr at 46). The DNA Subcommittee gives binding recommendations to the Commission (March 15, 2022 tr at 235). OCME is also accredited nationally by ANAB, which is the American National Standards Institute ("ANSI") National Accreditation Board (February 18, 2022 tr at 40). OCME is required to follow the FBI's Quality Assurance Standards, which set standards on how to do a validation (February 28, 2022 tr at 56, April 6, 2022 tr at 561).
Prior to LCN testing going online at OCME, the laboratory appeared before the DNA Subcommittee of the New York State Forensic Science Commission to discuss the use of LCN testing (March 11, 2022 tr at 159-160). The first DNA Subcommittee meeting regarding LCN testing was on May 17, 2005, where the LCN validation studies, including the procedures and what OCME was planning to do, were presented (March 11, 2022 tr at 161). The next DNA Subcommittee meeting regarding LCN testing was on September 9, 2005 (March 11, 2022 tr at 164). At that meeting, in addition to OCME answering questions from the Subcommittee members, Dr. David Werrett, a member of the Subcommittee and a full-time scientist at the FSS in the UK, gave a presentation on how LCN testing was used at their laboratory (March 11, 2022 tr at 167-168). On October 6, 2005, the DNA Subcommittee wrote a letter to the Chair of the New York State Commission of Forensic Science approving OCME's validation of LCN testing (March 11, 2022 tr at 168-169). On December 6, 2005, OCME's use of additional cycles in LCN testing was discussed before the Commission (March 11, 2022 tr at 176). On December 15, 2005, the Commission issued a letter approving LCN testing, including the increased cycle number for LCN DNA testing (March 11, 2022 tr at 180-181). In early 2006, OCME went online with LCN DNA testing (March 11, 2022 tr at 182). Later that year, on August 22, 2006, OCME was before the DNA Subcommittee to discuss proficiency testing of analysts that were trained in performing LCN testing (March 11, 2022 tr at 183, 186). Following the meeting, the Subcommittee found that the responses of OCME were satisfactory in regard to the concerns expressed by the Commission of Forensic Science (March 11, 2022 tr at 187).
The meeting minutes of the New York State Commission on Forensic Science dated December 6, 2005, which is the date LCN was approved, stated that a Commission member, Peter Neufeld, Esq., suggested that proficiency tests be developed by the lab at the stated minimum detection level of 20 picograms (April 29, 2022 tr at 717). Several members noted that proficiency tests are not generally manufactured at minimum threshold values, as the threshold is determined through test validation studies and the use of controls during the analysis. Id. After discussion, it was agreed that the minimum threshold proficiency testing issue would be referred to the DNA Subcommittee for further review and recommendation. Id. However, Dr. O'Connor testified that he was told during LCN training that OCME had never said that the minimum detection level for LCN testing was 20 picograms (April 29, 2022 tr at 718). Dr. O'Connor was aware that Dr. John Butler from the National Institute of Standards and Technology ("NIST") has stated over the years that validation should establish the minimum limitation of a technique, and Dr. O'Connor agreed with this opinion (April 29, 2022 tr at 718-719, 722).
In 2014, the Forensic Science Commission requested additional review of portions of the LCN validation, specifically, the lower limits of LCN testing, asking whether the OCME's procedures had changed since the initial validation and if there were changes, whether there were validations to support them (March 15, 2022 tr at 222, 225-226). On June 2, 2014, the DNA Subcommittee issued a letter stating that the Subcommittee unanimously found that scientifically, there was no lower limit in the quantity of DNA that must be present before LCN testing could be employed (March 15, 2022 tr at 225-228). It was pointed out that this question and answer did not include the words "validation study" (April 22, 2022 tr at 695). Dr. O'Connor testified that OCME, based on their validation, does employ a lower limit of one picogram per microliter or five picograms total (April 22, 2022 tr at 697).
In order to answer the remaining questions, in August of 2014, the Subcommittee visited OCME on two separate dates (March 15, 2022 tr at 228). On September 5, 2014, the DNA Subcommittee held a meeting and voted that: (1) sufficient validation had been conducted on new instrumentation and software that had been implemented at OCME since 2005 and (2) there had been no substantive changes made to the LCN DNA procedure since its approval in 2005 (March 15, 2022 tr at 230-232). On September 16, 2014, the DNA Subcommittee sent a letter to the Chair of the Commission reporting back its findings (March 15, 2022 tr at 232-233). Dr. O'Connor testified that DNA Subcommittee members change over time, and the individual members were different between 2005, when LCN was approved, and 2014 (March 15, 2022 tr at 234). In 2009, approximately three years after OCME started LCN testing, OCME's LCN validation summary was published in the Croatian Medical Journal, a peer-reviewed journal (March 11, 2022 tr at 149-150). In addition, OCME presented the LCN validation studies at dozens of conferences and workshops, such as the American Academy of Forensic Sciences and the International Symposium on Human Identification (March 11, 2022 tr at 155). It was pointed out during cross-examination that the entire validation summary, which is 603 pages, was never published in a journal (April 6, 2022 tr at 549).
Dr. O'Connor testified that all DNA testing involves a certain amount of subjective interpretation by an analyst, who will look at the data and ensure that the conclusions that they were reaching were supported by the data they were seeing and that the procedures were supported by the validation studies (March 11, 2022 tr at 191). However, it was Dr. O'Connor’s opinion that there was less room for interpretation in LCN testing because the procedures and protocols are pretty strict when it comes to how to interpret the samples and they err on the side of caution (March 11, 2022 tr at 192). To interpret low copy number samples, OCME modified its protocols from the high copy number protocols to account for the stochastic effects and provided extra training to analysts tasked to perform them (March 21, 2022 tr at 352). However, Dr. O'Connor stated that he would not agree that interpreting high template samples is easier than interpreting low template samples. Id. Both require training, experience and validation. Id. Dr. O'Connor did agree that, in general, DNA mixture is more difficult to interpret than single source DNA due to factors such as allele sharing or stochastic effects (March 21, 2022 tr at 354-355). In addition, the interpretation can be more difficult if some of the components are low template (March 21, 2022 tr at 356).
During cross-examination, Dr. O'Connor testified that drop-out makes accurately identifying the number of contributors in a sample or identifying the genotypes in a sample more difficult (March 21, 2022 tr at 339). And since drop-out is more frequent with low copy number samples, it has a more significant effect on them. Id. However, Dr. O'Connor testified that since you know that drop-out is going to take place, you would account for that in your interpretation process. Id. Similarly, drop-in is more common with low template DNA samples than with high template DNA samples (March 21, 2022 tr at 343). Problems such as drop-out and drop-in led to probabilistic genotyping programs, which take into account such effects (March 21, 2022 tr at 340, 344). Dr. O'Connor also testified that stutter is easier to identify in high template samples (March 21, 2022 tr at 345). In addition, stutter can be mistaken for a true allele, and if there is a true allele at the same location where a stutter appears, the stutter can make the true allele appear taller (March 21, 2022 tr at 346). Because stutter tends to be seen at a higher rate in low copy number DNA samples, it makes interpreting a low copy number DNA sample more difficult than a high copy number DNA sample (March 21, 2022 tr at 348).
A spike in voltage, which Dr. O'Connor does not consider to be a stochastic effect, is a machine artifact, and is seen with both high template and low template samples (March 21, 2022 tr at 348). The machine voltage is increased for low copy number samples, but as far as Dr. O'Connor was aware, this does not cause more spikes (March 21, 2022 tr at 348-349). A dye blob occurs where dyes are attached to the amplified products, and the additional dyes run through the capillary electrophoresis and show up on the electropherogram (March 21, 2022 tr at 349). In Dr. O'Connor's opinion, dye blobs do not make interpreting low copy number samples more difficult than high copy number samples (March 21, 2022 tr at 350). In any event, dye blobs are rare, and it is something that analysts are able to account for. Id.
In 2007, there was a case in the UK commonly called the Omagh Bombing case with a defendant named Sean Hoey (March 21, 2022 tr at 381-383). In that case, the court excluded the evidence of LCN DNA testing. Id. This led to LCN DNA testing being suspended in the UK for a short period of time (March 21, 2022 tr at 384). However, Dr. O'Connor testified that after an internal review of the LCN DNA testing in the UK, they ultimately found that the methodologies and processes were sound, and reinstated it (March 21, 2022 tr at 382, 384). In April of 2008, a report by the forensic science regulators in the UK was published in response to the British government's request to examine the LCN methods that were used in the UK at the time (March 21, 2022 tr at 385). The Caddy report stated that at the time, there was not yet a legal and scientific consensus regarding the quality of the data presented by analysis of low concentration DNA STR profiles (March 21, 2022 tr at 386). In addition, the Caddy report stated that the questions of how and when to apply statistical methods did not reach a clear consensus (March 24, 2022 tr at 397). When Dr. O'Connor joined the OCME LCN group in September of 2008, as part of his training, the Caddy report was one of the published papers he had to reference (March 21, 2022 tr at 393). It was brought out during re-direct examination that the Caddy report did conclude that the science and the methodology of low template testing, including those using increased cycle numbers, are sound and reliable, and the three companies (the Forensic Science Service Ltd., LGC Forensics and Orchid Cellmark Ltd.) that did that testing produced more reliable results (May 31, 2022 tr at 854-855).
This report is commonly referred to as the Caddy report as the lead author was Professor Brian Caddy. While this report was being prepared, LCN DNA testing was suspended at FSS for about a month (May 31, 2022 tr at 857-858).
In 2009, the FBI disallowed LCN profiles from being uploaded to the national DNA index system databank (March 24, 2022 tr at 402). Dr. O'Connor has heard a couple of people say that LCN is fit for use in unidentified persons cases but not for criminal cases (March 24, 2022 tr at 437). As far as Dr. O'Connor was aware, out of the eight public New York laboratories, OCME is the only laboratory to perform LCN methodologies for criminal court cases (March 24, 2022 tr at 442). While Dr. O'Connor does not know every laboratory's procedure, he was not aware of any lab in the United States or elsewhere that implemented OCME's high sensitivity DNA methodology for criminal cases (March 24, 2022 tr at 443-444).
Identifiler's amplification kit user manual states that the kit has been optimized to reliably amplify and type approximately 500 picograms to 1.25 nanograms of sample DNA (March 24, 2022 tr at 457-459). The manual does not say that Identifiler is reliable at 20 picograms (April 6, 2022 tr at 542). In addition, OCME has validated the use of the kit to run with five microliters of DNA extract, whereas the manual calls for ten microliters of DNA extract. Id. The manual does not state to increase the voltage as OCME does on some LCN cases, and in fact, does not make a statement endorsing OCME's LCN protocols (April 6, 2022 tr at 543-544). However, the manual does state that each laboratory using the kit is to perform appropriate validation studies before using it (March 24, 2022 tr at 460). Dr. O'Connor testified that based on the laboratories' needs and their environments, each laboratory will validate and customize the kits, and will not follow the manufacturer's recommendation explicitly (May 31, 2022 tr at 859-860). In Dr. O'Connor's opinion, a laboratory should perform validation studies based on their accreditation and best practices in the community before they use a kit on casework, as opposed to buying a kit and just using it without independent validation studies (May 31, 2022 tr at 863). Such validation studies are required in order to meet the accreditation standards. Id.
One nanogram is a thousand picograms.
Dr. O'Connor testified that Dr. Mitchell Holland is a well-respected scientist in the forensic DNA field, and has done research on LCN (April 6, 2022 tr at 554). Dr. O'Connor was deposed in a civil case where Marina Stajic, a former employee at OCME, had sued OCME. Id. During that deposition, Dr. O'Connor was shown an email between Timothy Kupferschnid, Chief of Labs and Director of the Forensic Biology Department at OCME, and Dr. Holland (April 6, 2022 tr at 554-555). In that email, Mr. Kupferschnid asked Dr. Holland to give a statement that LCN was generally accepted, and Dr. Holland responded that he thinks "it is fair to say that there isn't ‘general acceptance’ of the ‘LCN’ approach in the community, depending on how you frame things" (April 6, 2022 tr at 559).
On re-direct, it was brought out that in 2009, Dr. Holland was a witness for the People at a Frye hearing in the case of People v. Megnath (May 31, 2022 tr at 875-876). In the PowerPoint Dr. Holland had prepared for that hearing, it was stated that "[v]alidation studies have clearly illustrated that increasing the number of cycles from 28 to 31, along with the appropriate laboratory and interpretation method, produces results that can be reliably reported in criminal cases. Other studies have illustrated that cycle number can be increased to 34 cycles and still produce results that can be reliably reported." (May 31, 2022 tr at 879). Dr. Holland also stated that it was clear from validation studies that input amounts of DNA far less than the kit recommended 0.5 to 1.25 nanograms can produce reliable results (May 31, 2022 tr at 880). In addition, Dr. Holland's PowerPoint stated that "any suggestion that only a few laboratories in the U.S. and/or the world are doing LCN STR analysis is a misrepresentation of reality" (May 31, 2022 tr at 881). Dr. Holland also stated that most laboratories use some version of LCN STR techniques, and Dr. O'Connor stated that this is the same conclusion that Dr. Butler made in his book (May 31, 2022 tr at 883).
Dr. Bruce Budowle, who was a senior scientist in the DNA Unit at the FBI laboratory and a pioneer of modern day STR testing, co-authored an article in 2001 regarding some limitations and considerations regarding LCN (April 7, 2022 tr at 576-577, 584). In the article, Dr. Budowle wrote that the current multiplex STR typing strategies were sufficiently sensitive to detect alleles in the LCN range without further modification (April 7, 2022 tr at 577). Dr. O'Connor agreed with that statement, but further stated that to deal with the increased stochastic effects, they moved to the LCN methodology. Id. Dr. Budowle also stated that LCN typing is not reliable for mixture analysis and confirmation by mixture. Id. However, Dr. O'Connor testified that by accounting for stochastic effects and modifying the interpretation process, one can analyze a mixture and confirm a mixture using the LCN technique. Id. Dr. O'Connor disagreed with Dr. Budowle that LCN cannot be used for excluding someone as a suspect in a touch DNA case, because if you have a sample where the comparison alleles are not seen in that mixture, one would exclude that person from that sample (April 7, 2022 tr at 578). Dr. Budowle stated that it is difficult to validate LCN typing because results are often not reproducible (April 7, 2022 tr at 579-580). Dr. O'Connor testified that if you are expecting to get the same exact alleles at the exact same peak heights every time you amplify a sample, neither LCN nor high copy number testing would be considered reproducible (April 7, 2022 tr at 580). However, Dr. O'Connor testified that LCN is reproducible in the sense that the resulting conclusions from running the sample multiple times would be the same. Id. Dr. O'Connor did agree with Dr. Budowle that reagents are usually not subject to quality control at the conditions prescribed for LCN typing. Id. Dr. O'Connor stated that this is why OCME took that into account and did extra quality control of the reagents once they received them in-house to ensure that they were meeting the LCN criteria. Id. Dr. O'Connor agreed with Dr. Budowle that with LCN, peak imbalance or stutter products increase, and contamination is a greater concern with LCN testing (April 7, 2022 tr at 581).

Dr. Budowle, Dr. Arthur Eisenberg and Dr. Angela van Daal co-authored an article in 2009 titled, "Low Copy Number Typing Has Yet to Achieve 'General Acceptance.'" (April 7, 2022 tr at 594). This article was a part of a series of articles going back and forth with OCME (April 7, 2022 tr at 596). In this article, it was stated that OCME had not implemented an interpretation protocol consistent with their validation findings (April 7, 2022 tr at 596-597). The three doctors then co-authored another article with the same title in 2010 stating that the manners in which LCN typing was carried out in laboratories in the UK, New Zealand and at OCME were not known because protocols had not been disclosed (April 7, 2022 tr at 601-602). Dr. O'Connor stated that OCME's protocols were online at that point and the validation summary published in the Croatian Medical Journal included the protocols used by OCME for casework (April 7, 2022 tr at 602). Dr. O'Connor testified that the issues listed in the articles associated with low template amounts were considered by OCME during their validation studies and accounted for when coming up with the modified procedures and interpretation protocols (May 31, 2022 tr at 888).
When this article was written, Dr. Budowle and Dr. Eisenberg were at the University of North Texas, and Dr. van Daal was affiliated with a university in Australia but was taking her sabbatical at the University of North Texas (May 31, 2022 tr at 887-888).
In addition to OCME responding to the Budowle et al. article, Dr. Peter Gill and Dr. John Buckleton published an article in 2010 in response to it (May 31, 2022 tr at 894-895). In Dr. O'Connor's opinion, Dr. Gill and Dr. Buckleton are considered pioneers in the LCN testing methodology (May 31, 2022 tr at 895). The article stated that the Budowle et al. article presented views that were inadequately precise, demonstrated a lack of appreciation of underlying principles and did not align with the broader scientific opinion (May 31, 2022 tr at 896). Dr. Gill and Dr. Buckleton further stated that the LCN DNA testing performed by OCME was generally accepted as reliable in the forensic scientific community. Id.
As far as Dr. O'Connor was aware of, the only articles written criticizing OCME's use of LCN were authored by Dr. Budowle, Dr. van Daal and their colleagues (May 31, 2022 tr at 890-891). Dr. O'Connor was not aware of any other articles criticizing OCME's LCN protocols (May 31, 2022 tr at 891).
Dr. John Butler's "Advanced Topics in Forensic DNA Typing: Interpretation," which was published in 2015, is a well-regarded treatise and a textbook Dr. O'Connor uses a lot (April 22, 2022 tr at 649). Dr. O'Connor agreed with the statement in the book that in 2012, for complex DNA profiles, there was no predominant or overarching standard interpretation method (April 22, 2022 tr at 650). In another opinion piece, Dr. Butler wrote that "[d]ata interpretation uncertainties are highest and errors are most likely to be made in situations with DNA mixtures from three or four individuals, especially with low-template DNA, 'touch' samples," with which Dr. O'Connor, in general, agreed (April 22, 2022 tr at 655). Dr. O'Connor also agreed with Dr. Butler that when working with such samples, DNA detection sensitivity must be increased, and with increased sensitivity comes the need for greater responsibility in data interpretation. Id. Dr. O'Connor stated that this is why the validation studies and modifications in protocols are needed. Id. Dr. O'Connor also agreed with Dr. Butler that "inconsistencies with handling DNA interpretation of complex mixtures adds to the challenge of obtaining reproducible results from multiple analysts and/or forensic laboratories." Id.
A negative control is something that is expected to give a negative result, so that you would know that your reagents and chemicals are working correctly, and it is also a way to detect contamination in the process (April 22, 2022 tr at 662). OCME's protocol is that if you see up to nine non-repeating alleles in the negative controls, you would fail at that locus (April 22, 2022 tr at 663). However, while that means you are allowed up to nine non-repeating peaks in the negative control before it fails, it does not mean that if you see fewer than that, it cannot be failed (April 22, 2022 tr at 670). Dr. O'Connor testified that it really depends on the context. Id.
Dr. O'Connor was aware that Dr. Eli Shapiro has been critical about FST, LCN and other matters dealing with OCME (April 22, 2022 tr at 679). One of Dr. Shapiro's observations was that OCME overstated LCN performance relative to 28-cycle testing, in that if the LCN technique was better at preventing drop-out, it should show less drop-out than the 28-cycle protocol (April 22, 2022 tr at 680). However, Dr. O'Connor testified that LCN is not better at preventing drop-out; rather, it is the methodology that takes into account the increase in drop-out (April 22, 2022 tr at 680-681).
Dr. Zoran Budimlija was a research scientist with OCME and one of the scientists that worked on LCN's initial validation, but no longer works at OCME (April 29, 2022 tr at 725). Dr. Budimlija was a plaintiff's witness in a civil case against the city, and he had stated that the LCN validation did not establish that LCN was reliable for DNA mixtures below 25 picograms (April 29, 2022 tr at 726-727). Dr. O'Connor was asked whether the LCN protocols allow a sample to be amplified if template controls are greater than 0.1 picograms per microliter (April 29, 2022 tr at 728-729). Dr. O'Connor answered that in that case they would quantify the sample again and if it still came back above 0.1 because of the limited amount of sample in the sample table, the protocols allowed the analyst to move forward with amplification (April 29, 2022 tr at 719). The quantitation is repeated because the 0.1 picogram per microliter may be an indication of contamination. Id. For LCN, due to the increased cycles and increased sensitivity, drop-in alleles seen in the extraction do not equate to gross contamination, and therefore, the interpretation protocols were modified to account for that. Id. If this were a high copy number sample, you are required to re-quantitate or fail the sample. Id.
After the LCN triplicate amplification, in order to determine a profile, an allele needs to be seen in at least two out of the three amplifications to be assigned to that profile (April 29, 2022 tr at 749-751). However, when comparing a suspect profile with a non-deducible mixture sample, meaning a profile was not determined, to see if the suspect is included or excluded as a contributor, you would take into account alleles that only appear in one of the amplifications. Id.
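As a minimal sketch of the two-out-of-three consensus rule described in the preceding paragraph, the short example below keeps an allele only if it appears in at least two of the three replicate amplifications; the allele labels are invented for illustration and the code is not OCME's software.

```python
# Sketch of a consensus profile from triplicate amplifications: an allele
# is assigned to the consensus only if it is seen in at least two of the
# three replicate runs (allele labels below are made up for illustration).
from collections import Counter

def consensus_alleles(replicates, minimum=2):
    counts = Counter(allele for run in replicates for allele in run)
    return {allele for allele, n in counts.items() if n >= minimum}

runs = [{"14", "16"}, {"14", "16", "17"}, {"14"}]
print(sorted(consensus_alleles(runs)))   # ['14', '16']; '17' is treated as drop-in
```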
From 2006 to March of 2022, OCME analysts testified at trials about LCN approximately 493 times, including in federal courts and in jurisdictions outside New York City (March 15, 2022 tr at 216). In addition, between 2006 and 2016, OCME processed approximately 9,571 samples using LCN testing (March 15, 2022 tr at 206-207). As of 2015, OCME had used LCN testing for at least 15 post-conviction/Innocence Project cases (March 11, 2022 tr at 208).
In 2016, OCME stopped doing LCN testing with Identifiler on new cases and switched to Promega's Fusion amplification kit (March 15, 2022 tr at 218).
Dr. O'Connor testified that the University of North Texas is one of the premier missing persons laboratories in the country, and many highly reputable people work there (June 1, 2022 tr at 980). A few scientists who work at the University of North Texas are critical of the LCN methodology (June 2, 2022 tr at 986). As far as Dr. O'Connor was aware, the University of North Texas uses additional cycles in testing on their missing persons cases and identification of human remains cases, and even Dr. Budowle endorsed it for that use (June 2, 2022 tr at 986-987). One of Dr. Budowle's reasons for using LCN for missing persons cases was that those cases often do not involve mixtures (June 2, 2022 tr at 994).
It was Dr. O'Connor’s opinion that LCN DNA testing is based on methodologies that are grounded and generally accepted in the field, and its validation covered things that are recommended and required based on OCME's accreditation standard and OCME's best practice within the field of forensic DNA analysis (March 17, 2022 tr at 324). In addition, Dr. O'Connor testified that the procedures are based on extensive validation that was reviewed and accepted not only by the accrediting bodies, but also in peer-reviewed journals and conferences. Id.
A probabilistic genotyping software is a tool used by an analyst to help with the conclusion by putting probabilities to different possibilities of genotypes in a sample and then calculating a statistic, which is usually a likelihood ratio (March 15, 2022 tr at 245-246). A likelihood ratio statistic is a ratio of two probabilities, one where the person of interest is a contributor to the mixture and the other where the person of interest is not a contributor and an unknown person is, to tell you which scenario is more likely based on the STR data that was generated during the testing processes (June 2, 2022 tr at 1016-1017). There are different types of genotyping software, such as binary, semi-continuous and fully continuous (June 2, 2022 tr at 1017). Binary is the simplest in that it calculates whether the allele is absent or present. Id. For semi-continuous, you take into account some of the other biological phenomena, such as drop-in, drop-out and stutter ratios (June 2, 2022 tr at 1018). For fully continuous, you take into account even more biological aspects of the sample, such as peak height, peak height ratios, and allelic and amplification efficiencies, among other things. Id. The fully continuous calculation is typically done with computer software that runs a simulation called a Markov chain Monte Carlo ("MCMC"). Id. There are currently about eight to ten versions of probabilistic genotyping software (March 15, 2022 tr at 247).
STRmix, which OCME is currently using, is fully continuous.
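As an illustration of the likelihood-ratio framework described above, the sketch below multiplies hypothetical per-locus ratios of the probability of the observed data under the prosecution scenario and under the defense scenario; the numbers are placeholders, and the calculation is a simplified stand-in for what FST or STRmix actually computes from drop-in and drop-out rates, allele frequencies or peak heights.

```python
# Simplified likelihood-ratio sketch: for each locus, the probability of the
# observed alleles is computed under scenario H1 (the person of interest is a
# contributor) and under H2 (an unknown, unrelated person is instead), and the
# per-locus ratios are combined by multiplication.  The values below are
# placeholders, not output from FST.
import math

def combined_lr(per_locus_probs):
    """per_locus_probs holds (P(data | H1), P(data | H2)) pairs, one per locus."""
    log_lr = sum(math.log10(p1) - math.log10(p2) for p1, p2 in per_locus_probs)
    return 10 ** log_lr

loci = [(0.42, 0.031), (0.18, 0.006), (0.55, 0.012)]   # hypothetical values
print(f"combined likelihood ratio ~ {combined_lr(loci):,.0f}")
```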
When an analyst is interpreting a DNA sample, the typical best practice is to estimate the number of contributors to that sample by looking at the sample as a whole, meaning all the alleles at every location (March 15, 2022 tr at 236). SWGDAM's short tandem repeat interpretation guidelines, dated July of 2000, did not require a statistical calculation for every positive association between a crime scene sample and a known sample (March 15, 2022 tr at 237-239). OCME's standard operating procedure on how to estimate the number of contributors to a sample that is best described by the data is in general, consistent with SWGDAM's guidelines dated January 14, 2010 (March 15, 2022 tr at 239-241). Once an analyst determines the number of contributors to the sample based on these protocols, the analyst would manually review and compare it with the profile of a person of interest to determine whether the person is included as a potential contributor or not (March 15, 2022 tr at 242). SWGDAM's recommendations, dated January 14, 2010, stated that once there is any positive association between an individual and a sample, then a statistic must be performed in order to show the weight of the association (March 15, 2022 tr at 243).
Prior to 2010, SWGDAM did not come out with guidelines or recommendations for using a statistical calculation when a positive association was made in a mixture sample. However, around 2010, various organizations, including SWGDAM, came up with such guidelines (March 15, 2022 tr at 243-244).
Likelihood ratios were not widely used until 2009 or 2010, when the National Academy of Sciences report and the SWGDAM guidelines for interpretation of DNA mixtures came out (June 2, 2022 tr at 1013-1014). Those two documents recommended that a statistic should be applied to and accompany all positive associations, instead of just giving a qualitative conclusion of inclusion (June 2, 2022 tr at 1014). They recommended the use of a combined probability of inclusion ("CPI") or a likelihood ratio. Id. Following those two documents, the International Society of Forensic Genetics ("ISFG") also recommended the use of a statistic to accompany positive associations. Id. ISFG recommended the likelihood ratio over the CPI because more data is being used from the actual sample and it can incorporate drop-out and drop-in into the calculation (June 2, 2022 tr at 1014-1015).
Prior to adopting FST, if there was a single source sample or a mixture where the analyst could deconvolute one or more of the contributors, OCME applied the random match probability statistic, which estimated how rare that profile is in the population (March 15, 2022 tr at 251 and June 2, 2022 tr at 1015). If it was a non-deconvolutable mixture or you could not come up with a profile for a part of it, the analyst would do a qualitative assessment as to whether or not the person of interest was included or excluded as a possible contributor (June 2, 2022 tr at 1015-1016).
OCME used FST to calculate a likelihood ratio when there was a DNA mixture from which a distinct profile could not be deconvoluted (February 28, 2022 tr at 84-85). The likelihood ratio is set up with one scenario where the person of interest is part of that mixture and a second scenario where the contributors are two unknown, unrelated people (February 28, 2022 tr at 85). In Dr. O'Connor's opinion, FST is not a part of the actual DNA testing process (March 15, 2022 tr at 248). Once all the steps of DNA testing are done and the analyst interprets it and concludes that an individual is included as a possible contributor, then the likelihood ratio would be calculated using FST. Id. However, the overall conclusion of whether the person was included or not would be made by the analyst. Id.
OCME started developing FST in 2009, and it went online in 2011 (March 15, 2022 tr at 247-248). FST was developed in-house at OCME, and the development was led by Dr. Theresa Caragine and Dr. Adele Mitchell (March 15, 2022 tr at 248). Dr. O'Connor testified that there were other laboratories that used a probabilistic genotyping program before 2010 (March 21, 2022 tr at 359). For instance, the FSS in the UK used a program called LoComatioN from around 2007, which was the program on which OCME based FST, but Dr. O'Connor was not certain whether FSS used it on casework (March 21, 2022 tr at 359-360).
Once FST was developed, an internal validation was conducted from 2009 to 2011 (March 15, 2022 tr at 249-250). FST was validated for all sample types ranging from one contributor to three contributors (March 15, 2022 tr at 250). SWGDAM came out with probabilistic genotyping software validation guidelines in 2015, after OCME validated FST (March 15, 2022 tr at 251-252). However, it was Dr. O'Connor’s opinion that OCME's validation of FST comported with the recommendations included in SWGDAM's 2015 guidelines (March 15, 2022 tr at 253).
Once OCME completed its FST validation, OCME appeared before the DNA Subcommittee a total of four times to present the validation material (March 15, 2022 tr at 254, June 3, 2022 tr at 1067). The Subcommittee holds quarterly meetings, and FST validation was presented and discussed at four separate meetings. Id. OCME provided or presented to the Subcommittee drop-in and drop-out rate data, the statistical methods that were going to be used by the program, and the logic in how the computational flow was going to be done by the program (June 3, 2022 tr at 1067-1068). In addition, manual calculations were shown that verified the output from the program (June 3, 2022 tr at 1068). Furthermore, the user manual and the validation results including reproducibility, sensitivity and concordance were provided. Id. From OCME, Dr. Mitchell, Dr. Prinz and Dr. Caragine presented the FST to the Subcommittee. Id. Dr. O'Connor was aware that the Subcommittee members had posed questions and made comments about FST (June 3, 2022 tr at 1069).
On November 13, 2009, FST was first presented to the DNA Subcommittee, and Dr. Prinz and Dr. Mitchell presented the plans for FST and how OCME was going to proceed with the development and validation of the program (June 3, 2022 tr at 1072-1073). The minutes from that meeting stated that the information did not require a vote and was informational only (June 3, 2022 tr at 1073). On March 5, 2010, Dr. Mitchell presented to the DNA Subcommittee the aspects of FST and how the validation and development were going (June 3, 2022 tr at 1074). This presentation included the discussion of degraded samples (June 3, 2022 tr at 1075). The minutes from this meeting stated that Dr. Mitchell had stated that OCME had not yet completed its validation and requested feedback from the members regarding their views (June 3, 2022 tr at 1076).
The third presentation to the Subcommittee was on May 19, 2010 (June 3, 2022 tr at 1077). Prior to the third meeting, Dr. Chakraborty, one of the members of the Subcommittee, had a question about the independence of the loci when it came to drop-out rates (June 3, 2022 tr at 1077-1078). In response, OCME conducted additional conditional testing for possible independence or dependence of drop-out rates per locus, meaning seeing if there is any relationship between the rate of drop-out at locus one versus the rate of drop-out at locus two (June 3, 2022 tr at 1078). This issue was addressed at the third presentation by Dr. Mitchell, who stated that based on the tests, there was no pattern or consistent dependence of the drop-out rate of a locus in comparison to another locus (June 3, 2022 tr at 1078-1079). After this presentation, this issue was never brought up again by the DNA Subcommittee (June 3, 2022 tr at 1079). The meeting minutes from the third presentation stated that the Subcommittee had said that more work was required before they could vote on this method, and the Subcommittee members gave some suggestions to OCME for additional work they would like to see completed regarding the independence testing (June 3, 2022 tr at 1081, 1083).
On October 8, 2010, FST was presented again to the DNA Subcommittee, and the Subcommittee reviewed and evaluated OCME's FST and offered a binding recommendation to the Commission of Forensic Science that its use by OCME be approved for forensic casework (June 3, 2022 tr at 1084-1085). The DNA Subcommittee sent a letter, dated October 9, 2010, to the Forensic Science Commission documenting their approval of FST (June 3, 2022 tr at 1086). The Forensic Science Commission met on December 7, 2010 and voted and approved the use of FST (June 3, 2022 tr at 1088-1089). Procedurally, the Commission was supposed to send a letter to OCME documenting this, but due to a clerical oversight, this letter was not sent out until December 16, 2011 (June 3, 2022 tr at 1089-1090).
Once FST was validated both developmentally and internally, protocols were formulated that the lab must follow in order to use the program (March 15, 2022 tr at 266). When an analyst is making a determination as to whether someone is included in a mixture, the analyst is comparing the entire profile of the individual to the entire profile of the sample, meaning the conclusion is not based on just one allele at one location, but on all of the locations as a whole (March 15, 2022 tr at 268-269). When comparing a non-deducible mixture to a known sample, the conclusion that the individual is included as a possible contributor would be the strongest conclusion of inclusion (March 15, 2022 tr at 270). OCME's FST standard operating protocols contained specific criteria that have to be met in order to run the FST program (March 17, 2022 tr at 278). First and foremost, a qualitative assessment of the sample must have been done and the conclusion of being included as a possible contributor must have been made (March 17, 2022 tr at 278). If the analyst concludes that the individual was excluded as a possible contributor, no likelihood ratio would be calculated (March 17, 2022 tr at 283). In addition, either part or all of the mixture must not be deconvoluted into a distinctive DNA profile (March 17, 2022 tr at 278-279). Then, if a statistic is needed, a likelihood ratio can be calculated at that point. Id. The protocols discussed the input amount for the types of samples; for example, for low copy number, it would be those that are less than 100 picograms in the amplification (March 17, 2022 tr at 279-280). In addition, the analyst would input the specific allele data from the mixture and choose a scenario or a hypothesis based upon the number of contributors to that mixture. Id.
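Purely as an illustration of the preconditions described in the preceding paragraph, the sketch below encodes when a likelihood ratio would be run; the field and function names are invented for the example and do not come from OCME's protocols or the FST software itself.

```python
# Hypothetical sketch of the gating criteria described above: the analyst's
# qualitative conclusion comes first, an exclusion gets no likelihood ratio,
# and FST is reserved for mixtures (or portions of mixtures) that cannot be
# deconvoluted into a distinct profile.  Samples under 100 pg in the
# amplification are treated as low copy number.
from dataclasses import dataclass

@dataclass
class Comparison:
    qualitative_inclusion: bool   # analyst's manual comparison conclusion
    deconvoluted: bool            # was a distinct profile deduced from the mixture?
    template_pg: float            # DNA input to the amplification

def should_run_fst(c: Comparison) -> bool:
    if not c.qualitative_inclusion:   # exclusions receive no likelihood ratio
        return False
    if c.deconvoluted:                # a deduced profile is handled with an RMP instead
        return False
    return True

c = Comparison(qualitative_inclusion=True, deconvoluted=False, template_pg=19.0)
category = "low copy number" if c.template_pg < 100 else "standard"
print(should_run_fst(c), category)
```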
OCME has given lectures, presentations and workshops involving FST on many occasions (June 2, 2022 tr at 1004, June 3, 2022 tr at 1066). OCME fully validated FST in 2011 and published the validation summary in 2012 in a scientific journal, which was also presented at a forensic science conference (June 2, 2022 tr at 1005). The validation is over 24 volumes worth of information, so OCME created an executive summary, which was a concise overview of the entire validation, and gave it to the DNA Subcommittee to review along with other validation material (June 3, 2022 tr at 1091). The executive summary included a reference to each of the volumes of the validation itself. Id.
In 2006, ISFG published an article in Forensic Science International (March 17, 2022 tr at 289). Dr. O'Connor testified that you need to apply to be a part of ISFG, and in his opinion, the scientists in this society are a part of the relevant scientific community (March 17, 2022 tr at 301). There were approximately ten authors from ISFG, including Peter Gill and Charles Brenner, who are, in Dr. O'Connor's opinion, the two pioneers of all the forensic statistics used in DNA analysis (March 17, 2022 tr at 292). This article did not discuss FST specifically, but it stated that the advantage of a likelihood ratio framework was that stutter and drop-out can be assessed probabilistically (March 17, 2022 tr at 291, 295). Dr. O'Connor testified that since this article in 2006, the majority of the community has moved towards using likelihood ratios as a form of statistics when doing mixture interpretation and adding weight to the conclusion (March 17, 2022 tr at 297). In Dr. O'Connor's opinion, this would include using probabilistic genotyping software, such as FST. Id.
In 2012, ISFG published another article in the Journal of Forensic Science International: Genetics, a journal affiliated with ISFG (March 17, 2022 tr at 298-299). The authors of this article were similar to those of the 2006 article, and included Dr. Gill (March 17, 2022 tr at 300). The article stated that the probability of drop-out can be estimated by logistic analysis or by using an empirical approach, and referenced a paper OCME had published outlining the development and validation of FST (March 17, 2022 tr at 302).
In 2011, an article titled, "Estimating Drop-out Probabilities in Forensic DNA Samples: A Simulation Approach to Evaluate Different Models," was published in the Journal of Forensic Science International: Genetics (March 17, 2022 tr at 303-304). There were five authors including Dr. Gill, and others from France and Norway, who in Dr. O'Connor’s opinion were experts in the field of DNA analysis and forensic statistics (March 17, 2022 tr at 306-307). The article stated that "the likelihood ratio framework is the preferred approach to report the weight of DNA evidence" (March 17, 2022 tr at 307-309).
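For context, the likelihood ratio framework endorsed in these articles weighs the probability of the observed DNA evidence under two competing hypotheses; in its generic form (not specific to any one program's internal model), it can be written as

\[ LR = \frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)}, \]

where E is the observed profile or mixture, H_1 typically posits that the person of interest (along with any assumed contributors) is a source, and H_2 posits unknown, unrelated contributors. A ratio greater than one supports H_1, and a ratio less than one supports H_2.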
In the fall of 2016, OCME notified its customers (i.e., the district attorney's offices, NYPD and defense organizations) about three new technologies it would be using: (1) an amplification kit called PowerPlex Fusion ("Fusion"); (2) a genotyping software called GeneMarker and (3) a probabilistic genotyping program called STRmix (March 17, 2022 tr at 308-309). In 2016, it was announced that, starting in 2017, CODIS would increase the number of core loci needed in order to upload to the national database from 13 to 20 locations (March 17, 2022 tr at 310). Identifiler, the kit that OCME was using at the time, was not able to meet that core requirement (March 17, 2022 tr at 311). Therefore, OCME, along with most labs in the country, had to implement a new kit. Id. OCME validated Fusion to be used with 29 cycles and to test 24 locations (March 17, 2022 tr at 311-312). In addition, the minimum total DNA input validated for the Fusion kit was 37.5 picograms, and this is the range that can be used with standard testing without having to employ additional cycles or an LCN interpretation process (March 17, 2022 tr at 312-313). The Fusion range covered most of the LCN range that was being used. Id. It was brought out during cross-examination that Fusion is more sensitive than the Identifiler kit (March 21, 2022 tr at 364). Dr. O'Connor testified that the adoption of Fusion did not invalidate the reliability of the LCN technique OCME had been using (March 17, 2022 tr at 313). The fact that Fusion can go down to that range simply shows an advancement in kit technology and does not invalidate what was done before. Id.
As for FST, OCME had validated FST to be used on Identifiler samples. Id. Therefore, OCME would have had to revalidate FST in order to use it on samples that were amplified with Fusion. Id. However, instead of investing the resources and time to revalidate FST, OCME decided to use a commercially available program called STRmix (March 17, 2022 tr at 313-314). In addition, STRmix is a fully continuous probabilistic genotyping software, whereas FST is semi-continuous, so STRmix takes into account more information from the DNA sample as it calculates the different probabilities and the likelihood ratio (March 17, 2022 tr at 315). When OCME went online with FST, STRmix was not commercially available (March 17, 2022 tr at 314). Since FST was validated to be used with the Identifiler kit, even today, if a sample that was amplified with Identifiler needs a statistic, FST would be used (March 17, 2022 tr at 315 and June 2, 2022 tr at 1024). For instance, although OCME stopped using Identifiler on new cases in 2017, if a comparison sample came in today that needed to be compared to an older sample that was amplified using Identifiler, FST would be utilized in that situation (June 2, 2022 tr at 1024). In September of 2017, the Legal Aid Society and the Federal Defender Service made an official complaint to the New York State Inspector General's office against OCME, claiming that OCME was engaging in negligent conduct and malfeasance in the way it was performing DNA analysis, specifically referring to the use of LCN and FST (March 17, 2022 tr at 315-316). This complaint was referred to the Commission on Forensic Science, and in response, the DNA Subcommittee reviewed LCN and FST again (March 17, 2022 tr at 320). On December 4, 2017, the DNA Subcommittee wrote a letter to the Commission stating that there was no significant malfunction as asserted in the letter to the Inspector General (March 17, 2022 tr at 321-322). The letter stated that, based on OCME's validations, LCN could be used in potentially identifying a major contributor to a DNA mixture and could be used with 31 cycles (March 17, 2022 tr at 322-323). As far as Dr. O'Connor is aware, the Commission never restricted OCME from using LCN testing or FST software (March 17, 2022 tr at 323-324).
From 2013 to 2022, OCME testified approximately 251 times regarding FST results in state and federal courts (March 17, 2022 tr at 288). It was Dr. O'Connor's opinion that FST is a tool grounded in the well-studied, well-published and accepted methodology of likelihood ratio frameworks, which incorporates the probabilities of drop-in and drop-out under the umbrella of semi-continuous probabilistic genotyping systems (March 17, 2022 tr at 325). The interpretation protocols developed were based on extensive validation, and therefore, it was Dr. O'Connor's opinion that the results are reliable and that the methodologies are accepted throughout the community. Id.
Dr. Ranajit Chakraborty was on the New York State DNA Subcommittee from July 1995 to September 2011 and had voted to approve both LCN and FST (April 7, 2022 tr at 621). However, Dr. O'Connor was aware that once Dr. Chakraborty went to the University of North Texas, he along with Dr. Budowle authored articles criticizing LCN (April 7, 2022 tr at 620). Dr. Chakraborty also changed his views regarding FST and stated that with the knowledge he had later on, he would have voted against approving those methodologies (April 7, 2022 tr at 621).
Dr. Chakraborty is now deceased.
In 2008, there was a letter to the editor in the International Journal of Legal Medicine, which was authored by Dr. Dan Krane and individuals who were not scientists (April 7, 2022 tr at 622-623). The letter stated that it was difficult to see how a forensic technique ["LCN"] could be deemed adequately validated for use in the courtroom when there was not a consensus on how its results should be interpreted (April 7, 2022 tr at 623). Dr. O'Connor disagreed with that statement. Id. The letter also stated that stochastic effects reduce the weight that can be attached to the findings of an LCN DNA profile match, to which Dr. O'Connor responded that that would depend upon whether there were modified interpretation procedures to account for those stochastic effects (April 7, 2022 tr at 625).
OCME published two papers describing the FST validation (April 29, 2022 tr at 752). Dr. O'Connor was not aware of any studies published about FST or any peer-reviewed articles about FST. Id. No laboratory other than OCME has tested FST (April 29, 2022 tr at 763). FST was put into casework at OCME in 2011 and since then, there have been two subsequent versions of FST (April 29, 2022 tr at 764). The last version of FST came out either in 2012 or 2013. Id. OCME now uses STRmix on any case that has been amplified with the Fusion amplification kit that went online in 2017 (April 29, 2022 tr at 767). FST has not been validated to be used with Fusion samples. Id.
OCME lowered the drop-out rates used in FST because it believed that a lower drop-out rate would result in a more conservative likelihood ratio (April 29, 2022 tr at 799). FST does not consider an allele's peak height in determining drop-out (April 29, 2022 tr at 818).
During FST validation, OCME estimated drop-out rates for single source template quantities ranging from 6.25 to 500 picograms, and for mixtures from 25 to 500 picograms (May 23, 2022 tr at 828-829). For evidence samples with DNA template quantities that fall between those estimation points, FST interpolates to determine the appropriate rate to use. Id. It was brought out during cross-examination that a grant proposal titled, "Development of Forensic Statistics for Small or Compromised Evidence Samples," submitted to the National Institute of Justice ("NIJ"), stated that OCME would create and evaluate two, three and four-person low template mixtures containing 125, 100, 75, 50, 25, 12.5 and 6.25 picograms of DNA (June 13, 2022 tr at 1133-1135). The grant proposal also stated that OCME would make the software available to all public forensic laboratories, and that for laboratories that use different instrumentation and protocols, OCME would construct a simple parameter conversion matrix (June 13, 2022 tr at 1135, 1147). However, FST was not made available to all public forensic laboratories (June 13, 2022 tr at 1147-1148). In addition, this conversion matrix was not mentioned in the FST validation papers (June 13, 2022 tr at 1150-1151).
This grant proposal was submitted to NIJ, but was not accepted (June 21, 2022 tr at 1304).
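To illustrate the interpolation between validated drop-out estimation points described above, the following sketch linearly interpolates a rate for a template quantity that falls between two validated amounts. The picogram grid loosely follows the quantities mentioned in the testimony and the grant proposal, but the drop-out rates themselves are made-up placeholders and the function name is hypothetical, not OCME's validated values or code.

```python
# Hypothetical sketch of interpolating a drop-out rate between validated
# template quantities; the rates below are illustrative placeholders only.
from bisect import bisect_left

# (template picograms, estimated drop-out rate)
DROPOUT_TABLE = [(6.25, 0.60), (12.5, 0.45), (25.0, 0.30),
                 (50.0, 0.18), (100.0, 0.08), (500.0, 0.01)]

def interpolated_dropout(picograms: float) -> float:
    """Linearly interpolate a drop-out rate for a quantity between grid points."""
    points = [p for p, _ in DROPOUT_TABLE]
    if picograms <= points[0]:
        return DROPOUT_TABLE[0][1]
    if picograms >= points[-1]:
        return DROPOUT_TABLE[-1][1]
    i = bisect_left(points, picograms)
    (x0, y0), (x1, y1) = DROPOUT_TABLE[i - 1], DROPOUT_TABLE[i]
    return y0 + (y1 - y0) * (picograms - x0) / (x1 - x0)

print(round(interpolated_dropout(40.0), 3))  # a 40 pg sample falls between the 25 and 50 pg points
```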
The Organization of Scientific Area Committees ("OSAC") is an organization administered by the National Institute of Standards and Technology that was formed to develop standards for the forensic DNA community as well as other disciplines (February 18, 2022 tr at 45). OCME is not required to follow the OSAC standards for its accreditation (May 23, 2022 tr at 832). However, OCME is currently reviewing OSAC's validation standards to see if there are any standards it would adopt (May 23, 2022 tr at 834-835).
As part of OCME's accreditation process, the laboratory is routinely audited (March 15, 2022 tr at 222). Laboratory auditors will check, among other things, protocols and procedures, education of staff, the quality assurance manual and whether proper validations were performed (May 23, 2022 tr at 842-843). As for validations, the auditors are tasked with ensuring that the validations met all the standards prior to the procedure being put online for casework (May 23, 2022 tr at 845). However, auditors will not perform actual testing using a software tool (May 23, 2022 tr at 844). As far as Dr. O'Connor was aware, no audit ever found that OCME's LCN and/or FST validations or their protocols failed to meet the standards (May 31, 2022 tr at 919-920).
The National Forensic Science Technology Center ("NFSTC") conducted an external audit of OCME from July 30, 2012 to August 2, 2012, which was after FST was brought online (June 1, 2022 tr at 936-938, June 3, 2022 tr at 1093-1097). After the audit, NFSTC sent a letter to OCME, dated January 4, 2013, stating that "the audit team determined that the Forensic Statistical Tool was not novel and that it met the requirements of a software modification," and that no software upgrades were made since the last external audit, which was in 2010. Id. Dr. O'Connor agreed with Dr. Mike Coble, who used to work at NIST and is now a professor at the University of North Texas, that the auditors do not take the protocols and test them out themselves to see if they are producing the results that they should be producing (June 1, 2022 tr at 941-942). Dr. O'Connor testified that auditors look to see whether there is a protocol that meets the standard (June 1, 2022 tr at 942). It was brought out during re-direct that the 2012 audit was a four-day audit conducted by fourteen auditors, many of whom were well-established forensic biologists or analysts from around the country (June 2, 2022 tr at 989-991). The auditors do review case files to ensure that the protocols are being applied correctly (June 2, 2022 tr at 992). For the 2012 external audit, there were no findings for OCME. Id.
It was Dr. O'Connor's opinion that comparing a semi-continuous probabilistic genotyping software with a continuous software is a little disingenuous because the semi-continuous approach only takes into account the alleles that are present and then incorporates drop-in and drop-out, whereas with a fully continuous approach, many more aspects of the DNA profile are used to help the analyst interpret the sample (June 2, 2022 tr at 1020-1021). It was Dr. O'Connor's opinion that comparing FST to STRmix is not very useful because of the way the two systems differ (June 2, 2022 tr at 1021). It was brought out during cross-examination that OCME has not done any studies comparing FST to other semi-continuous programs (June 13, 2022 tr at 1173). However, Dr. O'Connor testified that while such comparisons can be useful, not doing them does not mean that the program is not reliable (June 13, 2022 tr at 1184-1185). In reviewing the definition of validation in the FBI Quality Assurance Standards that was in effect in 2011, Dr. O'Connor testified that comparing a software to a particular software platform is not required as part of the validation process (June 21, 2022 tr at 1316-1317).
A 2014 article co-authored by Todd Bille of the Bureau of Alcohol, Tobacco, Firearms and Explosives, John Buckleton, Jo-Anne Bright, and others comparing the effectiveness of binary, semi-continuous (LabRetriever) and continuous models (STRmix) was entered into evidence, and Dr. O'Connor agreed with the article that accounting for more of the biological processes taking place gives a more effective use of the data (June 13, 2022 tr at 1170-1174). However, Dr. O'Connor stated that the use of a semi-continuous system is not ineffective, although it is less effective when compared to a fully continuous system that uses more information from the sample (June 13, 2022 tr at 1175).
Around 2009 and 2010, when OCME was developing FST, there was only one probabilistic genotyping software, TrueAllele, that was commercially available (id.; June 13, 2022 tr at 1130-1131). Today, the majority of labs have moved to using likelihood ratio calculations following the National Academy of Sciences report, SWGDAM guidelines and ISFG guidelines (June 2, 2022 tr at 1022). At OCME, all criminalists at level two and above were trained in the use of FST (June 2, 2022 tr at 1024).
When OCME developed FST, they used empirically derived drop-in and drop-out rates, meaning they physically amplified over 2,000 known donor samples and counted how often drop-in and drop-out were seen, and used that in the program itself (June 2, 2022 tr at 1025, 1028). The drop-in and drop-out rates were adjusted based on the type of sample, whether it was a high template sample, low template sample, two-person mixture, three-person mixture, the quant value, etc. (June 2, 2022 tr at 1025-1026, 1030). In general, the lower the amount of DNA or the lower the quant value, the more drop-out you would expect to see (June 2, 2022 tr at 1030). FST, like most semi-continuous probabilistic genotyping systems, did not take into account peak height because once you get into that low copy number range, peak heights are not as good an indication of the amount of DNA in the sample at that point (June 2, 2022 tr at 1030-1031). As such, OCME decided to use quant values instead of peak heights for FST (June 2, 2022 tr at 1031). This process of estimating the drop-in and drop-out rates took about a year and a half (June 2, 2022 tr at 1029).
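The empirical estimation Dr. O'Connor described, amplifying known donor samples and counting how often expected alleles failed to appear, can be pictured with the short sketch below. The function, the locus data and the toy runs are hypothetical illustrations; the actual study binned its rates by template amount, number of contributors and quant value.

```python
# Hypothetical sketch of estimating a drop-out rate empirically: compare the
# alleles observed in each amplification of a known donor against the donor's
# true genotype and count how often an expected allele failed to appear.
from typing import Dict, List, Set

def dropout_rate(known_genotype: Dict[str, Set[str]],
                 observed_runs: List[Dict[str, Set[str]]]) -> float:
    """Fraction of expected alleles, across all runs and loci, that were not detected."""
    expected = missing = 0
    for run in observed_runs:
        for locus, true_alleles in known_genotype.items():
            detected = run.get(locus, set())
            expected += len(true_alleles)
            missing += len(true_alleles - detected)
    return missing / expected if expected else 0.0

# Toy example: a donor heterozygous at two loci, typed in two low-template runs.
donor = {"D8S1179": {"12", "14"}, "TH01": {"6", "9.3"}}
runs = [{"D8S1179": {"12", "14"}, "TH01": {"6"}},   # one expected allele dropped out
        {"D8S1179": {"12"}, "TH01": {"6", "9.3"}}]  # one expected allele dropped out
print(dropout_rate(donor, runs))  # 2 of 8 expected alleles missing -> 0.25
```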
FST was modeled after a program used in the UK by Dr. Gill called LoComatioN (June 2, 2022 tr at 1027). The first stage of OCME's FST developmental validation was to come up with the different drop-in and drop-out rates based on the empirically derived samples that were processed. Id. The second phase was actually building the software. Id. The third phase was the internal validation, where OCME tested the software using known samples and mock casework samples to see if it was producing the expected results. Id. At the last phase, OCME ran a total of 480 amplifications of two-person and three-person mixtures, both high template and low template (June 2, 2022 tr at 1033-1034). The validation guidelines from SWGDAM stated that the internal validation process should include studies encompassing a total of at least 50 samples, and OCME used more than 400 samples for the FST validation (June 2, 2022 tr at 1037). The results showed that the likelihood ratios that were calculated mirrored, or were similar to, the analyst's qualitative assessment of the comparison (June 2, 2022 tr at 1041). For instance, for a true donor that was deemed to be included, the likelihood ratio was quite high, in the millions, whereas for a donor that was missing a lot of alleles and was concluded to be excluded, the likelihood ratio was well below one (June 2, 2022 tr at 1041-1044).
In casework, OCME does not run FST on exclusions but during the validation, they did run and calculate the likelihood ratio on all of the true donors that were interpreted and compared, whether included or excluded (June 2, 2022 tr at 1039).
As part of the FST validation, OCME also performed a false positive study, or non-contributor testing, to see how good the system was at separating the true contributors from the non-contributors (June 2, 2022 tr at 1044). For instance, if a low-level contributor shared alleles with a true contributor, the likelihood ratio can end up being higher than expected. Id. The non-contributor testing was done using a database of a little over 1,200 profiles of people that did not contribute to the mixtures and using them as "suspects" in the calculation, in both two-person and three-person comparisons (June 2, 2022 tr at 1044-1045). In total, more than half a million comparisons were done with non-contributors to see how often those non-contributors ended up with a likelihood ratio below one or above one (June 2, 2022 tr at 1045). The results were that 99.97 percent of the time, a non-contributor gave a likelihood ratio below one, which would be expected for somebody that did not contribute to the sample. Id. In 0.03 percent of the time, which was 163 comparisons out of half a million, the likelihood ratio was greater than one (June 2, 2022 tr at 1047-1048). This result was submitted to the DNA Subcommittee (June 21, 2022 tr at 1306). In Dr. O'Connor's opinion, if this were actual casework, a greater number of the comparisons would have been excluded qualitatively by the analyst, and therefore, FST would not have even been run (June 2, 2022 tr at 1048). Dr. O'Connor testified that this result showed that if somebody is deemed to be excluded from a sample, their likelihood ratio would be below one, and it would be very rare for somebody who is a non-contributor to the mixture to have a likelihood ratio above one (June 2, 2022 tr at 1051). In addition, Dr. O'Connor testified that obtaining a likelihood ratio above one for a non-contributor is not unique to FST, and there are papers by some of the experts in the field that state that, depending on how many profiles there are in the database being tested, you would expect to see a likelihood ratio above one for some non-contributors (June 21, 2022 tr at 1305). It was brought out during re-cross examination that OCME never generated a slide that indicated what the false positive rate was for samples below 25 picograms in the FST validation study (June 21, 2022 tr at 1365).
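The arithmetic behind those non-contributor figures can be checked in a few lines; the counts below mirror the approximate numbers in the testimony, and the snippet is only an illustration of how such a rate is tallied.

```python
# Hypothetical sketch: compute the share of non-contributor comparisons whose
# likelihood ratio exceeded one, using the approximate counts from testimony.
above_one = 163          # non-contributor comparisons with a likelihood ratio above one
total = 500_000          # roughly half a million non-contributor comparisons in the study
rate = above_one / total
print(f"{rate:.4%}")     # ~0.0326%, i.e., about 0.03 percent; the remaining 99.97 percent were below one
```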
Degradation is a term used to describe DNA that gets broken down over time, or the fragments themselves that get broken apart (June 3, 2022 tr at 1054-1055). Once cells leave the body, they begin to degrade or break down, and the level of degradation depends upon the environment in which the sample or the cell is kept (June 3, 2022 tr at 1055). When you look at a DNA result, there are ways to see if the sample itself is showing signs of degradation. Id. On the electropherogram, a sample showing signs of degradation tends to have peaks on the left side that are higher than the peaks on the right side, which is called a ski slope effect. Id. There is no way to definitively say a sample is degraded. Id. With low levels of DNA, a similar ski slope effect can appear, which makes it seem as if the sample is also degraded (June 3, 2022 tr at 1056). When FST was being validated, one experiment was to create a second module of the program by purposefully degrading samples with UV light, counting their drop-out rates and setting up the software to calculate the likelihood ratio accordingly. Id. This was compared to the likelihood ratios of the samples that were not degraded, and the results were that the degraded module did not do any better a job of separating the true contributors from the non-contributors than the regular module. Id. Therefore, the regular module was kept. Id. Furthermore, the samples that were run with the regular module showed the signs of degradation, which showed that the program was processing those samples correctly (June 3, 2022 tr at 1056-1057). The use of degraded samples in the validation was presented to the DNA Subcommittee and published in the validation summary (June 3, 2022 tr at 1057).
The FST report shows all the inputs to the program, such as the alleles of the comparison sample, the number of contributors, the amount of DNA that was amplified and whether the mixture was deducible or not (June 3, 2022 tr at 1057-1058). These inputs allow the program to use the correct drop-out and drop-in rates, and it then calculates the likelihood ratio for the four major racial groups, which are African American, Hispanic, Caucasian and Asian (June 3, 2022 tr at 1058). Therefore, FST generates four different likelihood ratios based on race. Id. Of the four, OCME will report the lowest value, to be conservative, since the analyst does not know the race of the individual who left the DNA (June 3, 2022 tr at 1059-1060). A likelihood ratio of one means that there is equal support for the numerator and denominator scenarios, so that would be a no-conclusion statement (June 3, 2022 tr at 1060). If the likelihood ratio is above one, it shows more support for the numerator scenario, which usually includes the suspect as a contributor. Id. If the likelihood ratio is less than one, it shows more support for the denominator scenario, which usually does not include the suspect but rather unknown, unrelated individuals. Id. In the FST report, a likelihood ratio of one would be no support, one to ten would be limited support, ten to a hundred would be moderate support, a hundred to a thousand would be strong support and anything above a thousand would get the strongest support (June 3, 2022 tr at 1062-1063).
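A condensed sketch of the reporting conventions described above appears below: the lowest of the four population-specific likelihood ratios is reported, and a verbal support category is attached using the thresholds from the testimony. The function names and the example likelihood ratios are hypothetical illustrations, not OCME values or code.

```python
# Hypothetical sketch of the reporting conventions described in testimony:
# report the lowest of the four population-specific likelihood ratios and
# attach a qualitative support category based on its magnitude.
def reported_lr(population_lrs: dict) -> float:
    """Return the most conservative (lowest) likelihood ratio across populations."""
    return min(population_lrs.values())

def support_category(lr: float) -> str:
    if lr == 1:
        return "no support"
    if lr < 1:
        return "supports the denominator (exclusionary) scenario"
    if lr <= 10:
        return "limited support"
    if lr <= 100:
        return "moderate support"
    if lr <= 1000:
        return "strong support"
    return "strongest support"

# Made-up example values for the four population groups.
lrs = {"African American": 2.1e6, "Hispanic": 4.8e5, "Caucasian": 9.3e5, "Asian": 1.6e6}
lowest = reported_lr(lrs)
print(lowest, "-", support_category(lowest))
```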
After the first year of using FST, OCME calculated the statistics to see the overall likelihood ratio result that they were getting with this tool (June 3, 2022 tr at 1061). Out of the 511 samples, 36 percent gave a likelihood ratio less than one and 64 percent gave a likelihood ratio greater than one. Id. Out of the samples that gave a likelihood ratio greater than one, 43 percent were greater than a million (June 3, 2022 tr at 1062).
The fact that FST could not account for the hypothetical relatedness was something that was discussed in the validation (June 3, 2022 tr at 1064-1065). However, if there is a situation where the defense is claiming that a relative was a possible contributor and if that individual's DNA was sent to OCME, an analyst could do a comparison of that person's profile to the mixture and run FST accordingly (June 3, 2022 tr at 1064).
In 2012, Dr. Mitchell wrote a letter to Dr. Ballantyne, the chair of the DNA Subcommittee, outlining an error that they had identified in the validation material that was given over to the Subcommittee (June 3, 2022 tr at 1091-1092). However, the letter further stated that the error did not impact the validation or performance of FST in any way, as the corrected value was not used in any calculation by the program. Id.
Dr. Adelle Mitchell joined OCME in September of 2008 (June 13, 2022 tr at 1129). Dr. Mitchell and Dr. Caragine were in charge of OCME's FST project (June 3, 2022 tr at 1098). Part of Dr. Mitchell's background is in computer programming (June 3, 2022 tr at 1098-1099). OCME had also contracted a company to do the bulk of the computer programming for the software, and the programmers were on site at OCME during the development of the program around 2010 and 2011 (June 3, 2022 tr at 1099-1100). In order to test that the computer program was doing the calculations correctly, Dr. Mitchell did manual calculations here and there (June 3, 2022 tr at 1101).
On April 7, 2011, FST was officially rolled out for use on casework (June 3, 2022 tr at 1102-1103). On the next day, it was brought to Dr. Mitchell's attention that there was a case that showed a negative likelihood ratio, which is a mathematical impossibility (June 3, 2022 tr at 1103). As soon as they saw this, FST was taken offline. Id. Dr. Mitchell sent an email to Roni Amiel, an employee in the OCME IT department, stating that for some amounts of DNA, FST was calculating negative drop-out rates, and Dr. Mitchell asked everyone to stop using the program until this could be fixed (June 13, 2022 tr at 1121-1123). The email further asked whether Pallavi Chiramana and Samir Iabbassen, employees in the OCME IT department, could work on this immediately, and stated that they needed to put casework on hold and might have to recall some results that had already been sent out, depending on the extent of the problem (June 13, 2022 tr at 1122-1123). It was determined that one of the programmers was making some unrelated changes to the program, some of them cosmetic, and inadvertently deleted a couple of words from a line of code, which caused incorrect drop-out rates to be associated with the calculation itself in a couple of the picogram ranges (June 3, 2022 tr at 1103-1104). This caused the calculations to be done incorrectly and resulted in the negative likelihood ratio (June 3, 2022 tr at 1104). The program was reverted back to what was written after the validation. Id. In addition, to prevent a negative likelihood ratio, Dr. Mitchell made a change to the program, which they called the 0.97 cap. Id. If the frequencies of the alleles seen at any given locus in the evidence sample add up to 0.97 or above, then that locus would be given a likelihood ratio of one, making it inconclusive (June 3, 2022 tr at 1104-1105). The allele frequency sum issue did not surface during the FST validation studies (June 13, 2022 tr at 1155). Dr. O'Connor believed Dr. Mitchell chose 0.97 based on the theta correction that is applied to the genotype frequency calculations done as part of the likelihood ratio calculations (June 3, 2022 tr at 1105). A theta correction is a traditional population statistic applied to account for different population subgroups, which was introduced into the community in the 90s (June 3, 2022 tr at 1105-1106). During OCME's presentations of FST to the DNA Subcommittee, it was mentioned that OCME may be using a 3 percent theta correction adjustment (June 3, 2022 tr at 1106). Dr. O'Connor testified that this is a traditional theta correction that is used in all of OCME's forensic statistics, not just FST. Id.
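The 0.97 cap can be pictured with the short sketch below. It is a hypothetical reconstruction of the rule as described in testimony, not the actual FST source code, and it assumes the standard convention that an overall likelihood ratio is the product of per-locus ratios.

```python
# Hypothetical reconstruction of the "0.97 cap" described in testimony: if the
# frequencies of the alleles observed at a locus sum to 0.97 or above, that
# locus contributes a likelihood ratio of 1 (inconclusive) to the overall product.
def capped_locus_lr(allele_frequencies: list, computed_locus_lr: float) -> float:
    if sum(allele_frequencies) >= 0.97:
        return 1.0            # locus treated as inconclusive
    return computed_locus_lr

def overall_lr(per_locus: list) -> float:
    """Multiply per-locus likelihood ratios, applying the cap locus by locus."""
    total = 1.0
    for freqs, lr in per_locus:
        total *= capped_locus_lr(freqs, lr)
    return total

# Toy example: the second locus trips the cap and is set to 1.
loci = [([0.10, 0.22, 0.15], 8.4),
        ([0.55, 0.44], 3.1)]        # frequencies sum to 0.99, which is >= 0.97
print(overall_lr(loci))             # 8.4, because the capped locus contributes 1
```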
After the 0.97 cap was added, OCME conducted a performance check to ensure the program was still operating the way it was supposed to (June 3, 2022 tr at 1107-1108). During the performance check, some samples that were previously evaluated during the validation were reevaluated or rerun to see what likelihood ratios the program was producing and to ensure that it was doing the calculations correctly (June 3, 2022 tr at 1109). In addition, they conducted a non-contributor test of 1,246 non-contributor profiles to ensure the program was calculating correctly after the changes were made. Id. In Dr. O'Connor's opinion, the performance check demonstrated that the software was performing properly after the fixes were made. Id. It was brought out during cross-examination that no bulk run calculations were done in the false positive study after the job switch defect was discovered (June 16, 2022 tr at 1210-1211).
Dr. O'Connor was aware that certain organizations had stated that these modifications made to the software were material alterations to FST (June 3, 2022 tr at 1109-1110). SWGDAM's Validation Guidelines for DNA Analysis Methods state that "a material modification is an alteration of an existing analytical procedure that may have a consequential effect on analytical results. A material modification shall be evaluated by comparing the results from the original procedure to the results of the modified procedure to ensure concordance. The laboratory should evaluate the appropriate sample number, sample type and the studies necessary to demonstrate this." (June 3, 2022 tr at 1110). In Dr. O'Connor's opinion, the fixes made to the FST source code met SWGDAM's definition of a material modification (June 3, 2022 tr at 1111). Therefore, it is Dr. O'Connor's opinion that an entirely new validation was not required, and a performance check was deemed necessary in order to confirm that the changes that were made produced concordant results. Id. In September of 2017, certain defender services in New York State made an official complaint to the New York State Inspector General's Office regarding FST. Id. In general, the complaint was that changes were made to the software and OCME was using the software without a full validation of the changes and without presenting the changes to the DNA Subcommittee. Id. The complaint was then referred to D.C.J.S. and the Commission, who then referred it to the DNA Subcommittee to look into the allegations (June 3, 2022 tr at 1112). In response to the complaint, OCME sent a letter to D.C.J.S. Id. OCME then supplied to the DNA Subcommittee the validation, including the performance checks related to the 0.97 issue (June 3, 2022 tr at 1113). Subsequently, the DNA Subcommittee concluded that there was no merit to the complaint. Id. The DNA Subcommittee did not take FST offline. Id. The majority of the DNA Subcommittee members at the time were different from those in 2011 when FST was approved (June 21, 2022 tr at 1314). The DNA Subcommittee did not reach a conclusion as to whether the 0.97 cap was a material alteration of the software or not. Id.
In Dr. O'Connor's expert opinion, probabilistic genotyping software programs that perform DNA mixture interpretation and generate likelihood ratios in forensic casework have been in use for decades and are certainly generally accepted within the relevant scientific community. Id. In addition, in Dr. O'Connor's opinion, the methods employed by FST specifically are reliable and generally accepted within the forensic science community (June 3, 2022 tr at 1113-1114). On cross-examination, Dr. O'Connor testified that by probabilistic genotyping he means inferring different genotypes from the data and assigning a probability to each, which can include drop-in and drop-out, among other things (June 13, 2022 tr at 1143). Dr. O'Connor also testified that when he said these methodologies have been used for decades, he meant that these methodologies, especially likelihood ratios, have been used and relied upon in different industries as well, not just in court in a forensic DNA setting (June 13, 2022 tr at 1146).
Nathaniel Adams is a computer scientist, and Dr. O'Connor was aware that he had been used by the defense as an expert in at least one other admissibility hearing (June 16, 2022 tr at 1211-1212). Dr. O'Connor was aware that Mr. Adams reviewed the FST source code pursuant to a court order in United States v. Johnson in 2016 (June 16, 2022 tr at 1212). Dr. O'Connor was also aware that Mr. Adams had stated that once he had reviewed the FST source code, he had detected the 0.97 cap in the source code. Id. Once the changes were made to the program, including the 0.97 cap, the performance check conducted was documented and made available for anybody at OCME (June 16, 2022 tr at 1213). Dr. O'Connor was aware that Mr. Adams had stated that when changes are made to a program, the changes should be documented and dealt with as different versions of the software (June 16, 2022 tr at 1216). While Dr. O'Connor agreed that independent review of source code or independent validation of a software may be valuable, it was his opinion that internal validation by a practitioner using the software would give more insight as to whether the program was working or not (June 16, 2022 tr at 1216-1217). In addition, such independent review or validation was not required of OCME based on their accreditation standards (June 16, 2022 tr at 1217). It was brought out during re-direct that Mr. Adams does not have a graduate degree and he has not worked in a forensic DNA lab (June 21, 2022 tr at 1344).
Dr. O'Connor was made aware of an article/manuscript by Dr. Jeanna Matthews from Clarkson University titled, "The Right to Confront Your Accusers: Opening the Black Box of Forensic DNA Software," while preparing for this hearing (June 16, 2022 tr at 1217). One of the co-authors of this manuscript is Dr. Dan Krane of Wright State University, who is also the owner of Forensic Bioinformatic Services in Fairborn, Ohio, a DNA expert consulting firm (June 16, 2022 tr at 1220-1221). Defense counsel on this case, Clinton Hughes, Esq., was also one of the authors of this paper (June 16, 2022 tr at 1221). According to the article, Dr. Matthews and her team downloaded FST from GitHub, which is an online repository for software and source code (June 16, 2022 tr at 1218-1219). Dr. O'Connor was aware that FST was put on GitHub sometime around the time of the case of United States v. Johnson (June 16, 2022 tr at 1219, 1222). According to the manuscript, Dr. Matthews and her team took data from the false positive study and ran it, and the manuscript does not mention anything about the bulk calculator. Id. It was stated in this paper that, "[s]pecifically, we find that 104 of the 439 samples (23.7%) triggered the undisclosed data-dropping behavior and that the change skewed results toward false inclusion for individuals whose DNA was not present in an evidence sample" (June 16, 2022 tr at 1224). Dr. O'Connor testified that the result of this paper is similar to what OCME had seen during its performance check, in that the results, depending on the specific case, could lead to a slightly higher or a slightly lower likelihood ratio depending on the alleles of the person of interest being compared (June 16, 2022 tr at 1225). In addition, Dr. O'Connor testified that although the paper seemed to conclude that the likelihood ratios are skewed upwards, the figures in the paper show that the vast majority of the non-contributors were still well below a likelihood ratio of one, which indicated that they still favored exclusion, which is what would be expected. Id. While Dr. O'Connor did not know in what context this paper, which was not a peer-reviewed article, was authored, the results of the article would not have caused OCME to do a follow-up investigation or a rebuttal (June 16, 2022 tr at 1225-1226).
Around 2014, OCME shared FST software with Dr. Mitchell Holland who is a professor at Penn State University (June 16, 2022 tr at 1226-1227). However, as far as Dr. O'Connor was aware, he was not able to install the program. Id. Dr. O'Connor was also aware that FST software was shared with Dr. Michael Coble when he was at NIST, but he also was not able to install the program (June 16, 2022 tr at 1226). Dr. Holland had testified on behalf of the prosecution in People v. Collins (June 16, 2022 tr at 1227). Dr. Holland has not done any studies about the use of FST. Id.
During the Collins hearing, it came to light that Dr. Mitchell had simulated locus genotypes at two loci (June 16, 2022 tr at 1231). Dr. O'Connor testified that OCME had a reference database at 13 of the 15 Identifiler loci, and rather than going back and retesting all those samples, some of which they no longer had because the reference database had been created more than a decade prior, they used allele frequencies supplied by the manufacturer of the testing kit for two of those loci. Id. Based on those allele frequencies supplied by the manufacturer, Dr. Mitchell then simulated genotypes in order to do some of the testing. Id. Dr. O'Connor testified that simulation of genotypes is something that is commonly done for testing in validation studies (June 16, 2022 tr at 1232). Dr. O'Connor further testified that OCME simulated most of its non-contributor database for the STRmix validations, and in fact, the developers of the STRmix software recommend that labs that are validating simulate non-contributor databases (June 21, 2022 tr at 1319). Dr. O'Connor was aware that Dr. Chakraborty, a population geneticist, had expressed opposition to combining the human genotypes with the simulated genotypes for those two loci (June 16, 2022 tr at 1232). Dr. O'Connor was aware that Dr. Chakraborty, who was a member of the DNA Subcommittee when FST was presented, had later stated that he had changed his mind about approving the use of FST, but at that point, he was no longer on the Subcommittee (June 16, 2022 tr at 1232-1233). Dr. O'Connor does not remember whether Dr. Chakraborty's concerns regarding FST were that it did not take into account degradation or that the drop-out rates were based on pristine lab samples that did not reflect real casework samples (June 16, 2022 tr at 1233). However, Dr. O'Connor testified that, in his opinion, they do reflect real casework samples, seeing as during the validation, the samples that were used to determine the drop-out rates were examined for characteristics of degradation, which is something you would see during casework, and some of them showed those characteristics (June 16, 2022 tr at 1233-1234). This gave OCME the confidence that FST could give reasonable likelihood ratios for all types of samples, including those that showed signs of degradation (June 16, 2022 tr at 1234).
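Simulating genotypes from published allele frequencies, as described above, is commonly done by drawing two alleles per locus in proportion to their frequencies; the sketch below is a generic illustration under that assumption (random pairing of alleles), with made-up frequencies, and is not OCME's actual procedure.

```python
# Generic sketch of simulating genotypes at a locus by sampling two alleles in
# proportion to published allele frequencies (random pairing assumed).
import random

def simulate_genotype(freqs: dict) -> tuple:
    """Draw an unordered pair of alleles according to their frequencies."""
    alleles = list(freqs)
    weights = [freqs[a] for a in alleles]
    pair = random.choices(alleles, weights=weights, k=2)
    return tuple(sorted(pair))

# Illustrative (made-up) frequencies for one locus.
locus_freqs = {"11": 0.30, "12": 0.35, "13": 0.25, "14": 0.10}
random.seed(0)
print([simulate_genotype(locus_freqs) for _ in range(3)])
```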
In OCME's validation paper for FST, Dr. Mitchell and Dr. Caragine indicated that 85 contributors were used to create the false positive study sample (June 16, 2022 tr at 1235). Dr. O'Connor was not aware that Dr. Shapiro had done a count and found the number to be around 61. Id. Dr. O'Connor was aware that during the Collins hearing, it was brought out that there was no documentation to support the ratio makeup of the individuals that were used in the study (June 16, 2022 tr at 1236).
Dr. O'Connor was aware that Dr. Bruce Budowle worked at FBI for over a decade and had a large part in creating the CODIS database, the national DNA database (June 16, 2022 tr at 1238-1239). Dr. Budowle then became the executive director at the Institute of Applied Genetics at the University of North Texas Health Science Center (June 16, 2022 tr at 1239). Dr. O'Connor testified that Dr. Budowle is a pioneer in the forensic DNA analysis field and has had a large influence in the procedures and tests that are used today. Id. Dr. O'Connor was aware that Dr. Budowle opposed the admissibility of FST for criminal court use and had testified for the defense in the Collins case. Id . Dr. Budowle had testified that the FST was unique in the way it determines drop-in and drop-out. Id. Dr. O'Connor was not certain that Dr. Budowle had testified that OCME never formally tested and never published studies regarding the theory that quantification could reliably determine drop-in and drop-out (June 16, 2022 tr at 1239-1240). However, Dr. O'Connor stated that OCME's validation paper mentioned quant as a function of drop-in and drop-out, so it was included in the published paper (June 16, 2022 tr at 1240). Dr. O'Connor was aware that Dr. Budowle had critiqued validation studies, including OCME's FST and LCN, about the use of pristine DNA samples to calculate drop-in and drop-out versus stochastic effects (June 16, 2022 tr at 1242).
Dr. Angela van Daal was one of the first to introduce PCR DNA testing into a court of law, and she was an assistant chief scientist at the South Australian Forensic Science Center (June 16, 2022 tr at 1248). She is also an accredited laboratory inspector for ASCLD-LAB. Id. Dr. O'Connor testified that Dr. van Daal is a respected scientist in the field. Id. Dr. van Daal had testified at a prior hearing in another case that FST was not generally accepted in the relevant scientific community (June 16, 2022 tr at 1249).
Dr. Heather Coyle is a professor of forensic science at the University of New Haven and she has a consulting company called Identacode (June 16, 2022 tr at 1249-1250). Dr. Coyle had worked at the state laboratory in Connecticut in the DNA lab, mostly in mitochondrial DNA (June 16, 2022 tr at 1250). Dr. O'Connor testified that Dr. Coyle is respected in the field. Id. Dr. Coyle had testified at a prior hearing that FST was not generally accepted in the relevant scientific community. Id.
Dr. Eli Shapiro received his PhD in biology from Yale University and was the assistant director managing the training group of OCME (June 16, 2022 tr at 1263-1264). Dr. O'Connor testified that Dr. Shapiro is part of the relevant scientific community (June 16, 2022 tr at 1264-1265). Dr. Shapiro had testified at a prior hearing criticizing FST for how the drop-out rates were estimated and for the use of pristine high quality buccal exemplar swabs to calculate the drop-out rates (June 16, 2022 tr at 1265). In addition, Dr. Shapiro testified that OCME arbitrarily lowered the drop-out rates below the empirically observed rates. Id. Dr. Shapiro had testified that in his opinion, FST is not generally accepted in the relevant scientific community (June 16, 2022 tr at 1266). It was brought out during re-direct examination that Dr. Shapiro has no post-doctoral work in statistics, no post-doctoral work in forensic analysis and has no background or work experience in the area of population genetics (June 21, 2022 tr at 1342). As far as Dr. O'Connor was aware of, Dr. Shapiro did not use FST on casework while he was at OCME (June 21, 2022 tr at 1343).
Dr. Dan Krane had stated that OCME and the former Austin Police Department laboratory were the only laboratories in the world that attempted to correlate drop-out with DNA quantity at the quantitation phase (June 16, 2022 tr at 1268-1260). Dr. O'Connor testified that FST does rely on the drop-out rates that were determined, and those were based on the quant value (June 16, 2022 tr at 1269). Dr. Krane also criticized the lack of documentation in studies between the quantity of DNA and the likelihood ratio produced by FST. Id. However, Dr. O'Connor testified that, by design, FST relies on the drop-out rates that were determined based on the template amount of DNA, which was included in the validation and also in the published validation summary. Id.
Dr. Kirk Lohmueller is a co-developer of LabRetriever, a semi-continuous probabilistic genotyping program (June 16, 2022 tr at 1271). Dr. O'Connor believed that LabRetriever is based on David Balding's LikeLTD source code and is very similar in its calculations to FST. Id. Dr. O'Connor could not recall whether Dr. Lohmueller had testified against the admission of FST in another case (June 16, 2022 tr at 1273).
Dr. Alan Jamieson from Scotland had testified at a prior hearing and stated that FST is not generally accepted in the relevant scientific community. Id.
Dr. O'Connor was aware that one of the criticisms by experts about FST was the fact that the validation did not take allele sharing into account (June 16, 2022 tr at 1276-1277). Dr. O'Connor testified that each of the different loci has a finite number of alleles, so people are bound to share alleles; therefore, when the program was validated, that certainly was an aspect of it (June 16, 2022 tr at 1277). Another criticism of FST is that it does not calculate any degree of relatedness when creating the likelihood ratio (June 16, 2022 tr at 1278). Dr. O'Connor testified that this limitation of the program is clearly stated, since OCME's report uses the term one or two or three "unknown, unrelated" individuals. Id. In addition, the fact that OCME did not account for hypothetical relatives in the program is discussed in the validation and in the published validation summary (June 21, 2022 tr at 1320). However, if the relationship is known and OCME is able to get a profile of the related individual, FST can either use that person as an assumed contributor or calculate an independent likelihood ratio comparing that person to the mixture (June 16, 2022 tr at 1278). Dr. O'Connor was aware that David Balding did the preliminary math to account for relatedness about a quarter century ago (June 16, 2022 tr at 1279). Dr. O'Connor does not believe there should have been any extra warning on the lab report that FST does not account for relatedness (June 21, 2022 tr at 1377).
The President's Council of Advisors on Science and Technology ("PCAST") stated that, "[w]hen further studies are published, it will likely be possible to extend the range in which scientific validity has been established to include more challenging samples. As noted above, such studies should be performed by or should include independent research groups not connected with the developers of the methods and with no stake in the outcome" (June 16, 2022 tr at 1292). Dr. O'Connor testified that while such independent research is certainly useful, he does not think that it is needed in order to establish the validity of a program. Id. Dr. O'Connor further testified that PCAST is an advisory body and not an accrediting body, and as such, its recommendations are not binding (June 21, 2022 tr at 1323-1324). The PCAST report covered different disciplines in addition to DNA analysis (June 21, 2022 tr at 1324). After the PCAST report came out, a number of groups within the different disciplines issued statements disagreeing with the conclusions, including, within the forensic DNA community, the FBI, the Department of Justice, the National District Attorneys' Association, the American Academy of Forensic Sciences and Dr. Bruce Budowle (June 21, 2022 tr at 1324, 1331-1332). It was pointed out that none of the authors of the PCAST report were forensic scientists (June 21, 2022 tr at 1347). However, it was brought out during re-cross that PCAST did consult with experts including John Butler, Kareem Belt (a former analyst at OCME), John Buckleton, Bruce Budowle, Itiel Dror, Ian Evett, Glenn Langenburg, Catherine Grgicak and Norah Rudin (June 21, 2022 tr at 1348-1350).
In an article written in 2020, Dr. Budowle stated that an error in the source code in and of itself does not make a software unreliable, and most, if not all, software is bound to contain some sort of source code error (June 21, 2022 tr at 1334). In that article, Dr. Budowle stated that the real question is whether there are mechanisms in place to detect a miscode, what the impact of the miscode is and whether the identified miscodes were corrected. Id. Dr. O'Connor testified that in his opinion, this is what OCME did when they identified an error in the source code (June 21, 2022 tr at 1335). Dr. Budowle also wrote that, "[e]mploying methods like Markov chain Monte Carlo that are routinely used in computational biology, physics, engineering, weather prediction and the stock market, probabilistic genotyping software grades proposed profiles on how closely they resemble or can explain an observed DNA mixture profile" (June 21, 2022 tr at 1361-1362). However, Dr. O'Connor testified that no semi-continuous probabilistic genotyping software, including FST, uses Markov chain Monte Carlo (June 21, 2022 tr at 1362). Dr. O'Connor agreed with Dr. Budowle that in order to understand the limitations of a probabilistic genotyping software, the user should perform validation studies to reduce chances of interpreting software output that is not supportable (June 21, 2022 tr at 1362-1363). Dr. O'Connor also agreed with Dr. Budowle that it is important to understand that the trained user is an integral part of the interpretation of DNA evidence. Id. Dr. O'Connor testified that the analyst is the one that is interpreting the results and making the conclusion, and it is not FST making the conclusion. Id. Dr. O'Connor further testified that once a trained analyst concludes that the sample is included as a possible contributor, FST is run to give weight to that possible inclusion, and even with a number of five billion, that is not saying that that person is definitely included in that mixture. Id.
(2) Natasha Harvin-Locklear, Esq.
Natasha Harvin-Locklear, Esq. testified that she works at the Division of Criminal Justice Services ("DCJS") in the Office of Legal Services (March 31, 2022 tr at 473-474). DCJS is a state agency that provides support and resources to the criminal justice community (March 31, 2022 tr at 474). Ms. Harvin-Locklear is currently the Associate Counsel at DCJS (March 31, 2022 tr at 475). She is also special counsel to various boards within the agency, including the Commission on Forensic Science and the DNA Subcommittee. Id. As it pertains to the Commission on Forensic Science and the DNA Subcommittee, her duties are to make sure that both the Commission and the Subcommittee adhere to relevant statutes and regulations and abide by the Open Meetings Law, and she also provides research assistance. Id.
The New York State Forensic Science Commission was established in 1994 and is a 14-member public body that is responsible for setting the standards for the public laboratories in New York State (March 31, 2022 tr at 476-477). The Commission was established under Article 49-B of the Executive Law, Sections 995-a and 995-b. Id. The commissioner of DCJS is the chair of the Forensic Science Commission (March 31, 2022 tr at 478). The commissioner of the Department of Health is an ex officio member. Id. The remaining 12 members are appointed by the governor and fall into certain categories based on recommendations. Id. The Commission members include the chair of the New York State Crime Laboratory Advisory Committee, the director of Forensic Laboratory, the director of the Office of Forensic Services, two scientists with experience in the areas of laboratory standards or quality assurance regulation and monitoring, a representative of a law enforcement agency, a representative of the public criminal defense bar, two members appointed based upon the recommendation of the Legislature and an attorney or a judge with a background in privacy issues and biomedical ethics. Id. The Commission members typically have term limits, but they can be re-appointed (March 31, 2022 tr at 481).
The main duty of the Forensic Science Commission is to set the accreditation standards for the public laboratories in New York State and approve any forensic methodologies that are brought before the Commission (March 31, 2022 tr at 479). The Commission members meet quarterly, and the meetings are open to the public. Id.
The DNA Subcommittee was created in 1995 and there are seven members (March 31, 2022 tr at 481). The chair of the Subcommittee is appointed by the chair of the Commission, and the rest of the members are appointed by the chair of the DNA Subcommittee (March 31, 2022 tr at 481-482). It is mandated that the Subcommittee members include individuals in the disciplines of molecular biology, population genetics, laboratory standards and quality assurance regulation and monitoring and forensic science (March 31, 2022 tr at 482). These individuals must be recognized in the scientific community based on the categories, and they are from all over the world. Id. DNA Subcommittee members are not paid for their work. Id. The DNA Subcommittee is responsible for reviewing and assessing all DNA methodologies that are brought before it and they set the standards for citations for DNA labs (March 31, 2022 tr at 483). The Subcommittee makes binding recommendations to the Commission, which address minimum scientific standards that are used when conducting DNA analysis. Id. The composition of the DNA Subcommittee members has changed over the years between 2005 and 2017 (March 31, 2022 tr at 484-485).
In New York State, there are 20 public laboratories, eight of which are public DNA laboratories, including OCME (March 31, 2022 tr at 486). In New York State, DNA laboratories are accredited by ANAB, the ANSI National Accreditation Board, as well as by the DNA Subcommittee and the Commission on Forensic Science. Id. There are only four other states that have a mandatory DNA lab accreditation process (March 31, 2022 tr at 487). The regulations that govern the accreditation process for New York State forensic science laboratories are codified in 9 New York Codes, Rules and Regulations ("NYCRR") Parts 6190 through 6193. Id. A forensic laboratory seeking accreditation should provide the supporting documents to ANAB or ABFT, which, after initial review, should be forwarded to the DNA Subcommittee for its review and a binding recommendation regarding accreditation to perform DNA testing (March 31, 2022 tr at 490-491). The DNA Subcommittee shall forward its binding recommendation to the Commission, which shall make a final determination as to whether New York State accreditation in forensic DNA testing should be granted (March 31, 2022 tr at 491).
If a DNA laboratory in New York State wants to bring a new forensic methodology online, it needs to obtain the approval of the DNA Subcommittee first, and the process is outlined in Executive Law 995-b(13) (April 6, 2022 tr at 496-497). The DNA Subcommittee shall assess and evaluate the proposed methodology and make reports and recommendations to the Commission (April 6, 2022 tr at 497). The DNA Subcommittee shall make binding recommendations for adoption by the Commission addressing minimum scientific standards to be utilized in conducting forensic DNA analysis, including but not limited to, examination of specimens, population studies and methods employed to determine probabilities and interpretation of test results. Id. The lab would submit documents, including the validation studies, to the Subcommittee and address any questions the Subcommittee may have by making a presentation or answering questions during the Subcommittee meeting (April 6, 2022 tr at 498-499). Each member of the DNA Subcommittee and the Forensic Science Commission gets their own vote (April 6, 2022 tr at 500).
In Ms. Harvin-Locklear's experience, she has seen the Forensic Science Commission or the DNA Subcommittee delay action on a technique, request more information from a lab before making a decision or limit the use of a proposed technology (April 6, 2022 tr at 500-501). She has also seen at least one occasion where the Commission had questions regarding the Subcommittee's binding recommendation and posed questions to the Subcommittee for further review (April 6, 2022 tr at 501). In one such case, the Commission was not satisfied with the Subcommittee's recommendation and did not accept it, and ultimately that methodology was accepted but with limitations (April 6, 2022 tr at 501-502).
(3) Dr. John Buckleton
Dr. John Buckleton is currently the Principal Scientist at the New Zealand Government Forensic Science Service, a position that entails DNA interpretation (August 15, 2022 tr at 1394). Dr. Buckleton received his undergraduate degree and PhD from Auckland University in chemistry (August 15, 2022 tr at 1395). Dr. Buckleton also holds a DSc in forensic science, a British Commonwealth higher doctorate that comes after the PhD. Id. Dr. Buckleton has worked in forensic science since 1983 and has done about 2,000 cases as the actual analyst. Id. Dr. Buckleton has worked for the United Kingdom Forensic Science Service ("FSS"), which was a government forensic service in the UK (August 15, 2022 tr at 1395-1396). FSS was considered to be the world leader in DNA testing, and it was the first laboratory in the world to use DNA in case work (August 15, 2022 tr at 1396). Dr. Buckleton worked in the United States in 1995 at the University of North Carolina (August 15, 2022 tr at 1396-1397). Subsequently, from 2014 to 2020, Dr. Buckleton worked for the New Zealand government but was stationed in the United States (August 15, 2022 tr at 1397). For the first two years in the United States, Dr. Buckleton worked with the National Institute of Standards and Technology, and for the rest of the time he worked in Maryland and then in California on the development of probabilistic genotyping software. Id. During this time, the United States was transitioning from previous methods of DNA interpretation to probabilistic genotyping, and Dr. Buckleton was deployed to teach, research and make court presentations across the United States. Id. For the past 20 years, Dr. Buckleton's professional experience has been predominantly in DNA, and he has been working in DNA analysis since its inception in case work, which was around 1988 (August 15, 2022 tr at 1397-1398).
During his career in New Zealand and the UK, Dr. Buckleton has worked on LCN testing in criminal case work, not doing the actual analysis, but doing the mathematics (August 15, 2022 tr at 1398). In addition, Dr. Buckleton has assisted in writing a validation paper for LCN for the ESR. Id. Dr. Buckleton has published a little over 230 articles in peer-reviewed journals, including on the topics of LCN methodology and probabilistic genotyping software (August 15, 2022 tr at 1398-1399). Dr. Buckleton has written articles specifically regarding the FST software used by OCME (August 15, 2022 tr at 1399). Dr. Buckleton is one of the creators of the STRmix software, which he began developing in May of 2011. Id. This included developing the algorithm and validating the software (August 15, 2022 tr at 1400). As for the computer software for STRmix, Dr. Buckleton worked with Dr. Duncan Taylor and Dr. Jo-Anne Bright. Id. Dr. Buckleton still works with laboratories that decide to use STRmix on criminal case work (August 15, 2022 tr at 1405).
Dr. Buckleton was appointed to the International Society of Forensic Geneticists’ DNA commission on mixture interpretation in 2006 and to the validation of probabilistic software in 2014 (August 15, 2022 tr at 1401-1402). In 2019, the STRmix team was awarded the top science award in New Zealand from the prime minister (August 15, 2022 tr at 1402). Dr. Buckleton is a member of the Royal Society of New Zealand and was a member of the International Society of Forensic Geneticists and the American Academy of Forensic Sciences (August 15, 2022 tr at 1402-1403). Around 2016, Texas had an issue with DNA interpretation, and Dr. Buckleton was appointed to a DNA advisory committee for the Texas Forensic Science Commission (August 15, 2022 tr at 1403). In addition, Dr. Buckleton was a member of SWGDAM from around 2012 to 2016. Id.
Dr. Buckleton has testified a few hundred times in total, approximately 20 of which were in the U.S. on the subject of probabilistic genotyping (August 15, 2022 tr at 1405-1406). Dr. Buckleton has been qualified both in the U.S. and abroad to testify as an expert on LCN methodology (August 15, 2022 tr at 1406).
Dr. Buckleton has not operated FST, which is a semi-continuous tool (August 15, 2022 tr at 1408). STRmix is a fully continuous probabilistic genotyping tool. Id. However, Dr. Buckleton has provided training in the U.S. on semi-continuous tools. Id. Dr. Buckleton was one of the developers of semi-continuous drop models (August 15, 2022 tr at 1408-1409). However, he has not done software development for any of the semi-continuous drop models (August 15, 2022 tr at 1409).
Dr. Buckleton was qualified as an expert in the areas of LCN methodology and DNA testing as well as probabilistic genotyping tools including software development and coding.
Dr. Buckleton testified that in 1999, the UK's FSS developed LCN as a methodology for casework, and he was a part of that development in data analysis (August 15, 2022 tr at 1415). In 2000, FSS started using LCN methodology on criminal case work (August 15, 2022 tr at 1415-1416). This was the first time LCN was used in criminal case work (August 16, 2022 tr at 1484). In the UK case of Sean Hoey, the DNA evidence had been enhanced using the UK's LCN technology, and the court did not allow the DNA evidence to be admitted (August 15, 2022 tr at 1416-1417). The judge in that case stated that LCN required more work and that it had not met the expected standard (August 15, 2022 tr at 1417). Subsequently, the UK's prosecution service suspended the 34-cycle LCN technology for three weeks and then reinstated it. Id. After that, the Caddy report came out, and LCN was used until 2012, when FSS closed. Id. Dr. Buckleton testified that the FSS closed because of government policy and privatization, and not because of the Hoey case or LCN DNA testing (August 15, 2022 tr at 1418-1419). Dr. Buckleton testified that he knows Dr. Dan Krane and, in his opinion, Dr. Krane had nothing to do with the eventual closing of FSS. Id. Even after FSS closed, New Zealand, the Netherlands and Australia continued to use LCN methodology on casework (August 15, 2022 tr at 1420).
Sean Hoey was indicted for the construction of explosive devices and for a number of explosions, including the Omagh bombing that had killed approximately 28 people (August 15, 2022 tr at 1416). Mr. Hoey was acquitted. Id.
Dr. Buckleton testified that the primary advantage of the LCN methodology is its increased sensitivity, which means results can be obtained from samples that might not otherwise yield any. Id. FSS patented the 34-cycle methodology, and Dr. Buckleton, Dr. Peter Gill and Dr. Jonathan Whitaker are the three named on the patent as the inventors (August 15, 2022 tr at 1420-1421).
In Dr. Buckleton's opinion, when a laboratory adopts LCN methodology, it should conduct validation studies and adopt three additional protocols (August 15, 2022 tr at 1421). The three protocols are increasing the cycle number, replicating and adjusting the interpretation guidelines and adopting protocols for a clean facility. Id. Furthermore, if a laboratory is adopting LCN methodology, special training is needed (August 15, 2022 tr at 1422). Dr. Buckleton testified that most laboratories do not have the resources to invest in enhanced sensitivity methods, and therefore, it tends to be used in specialist facilities and fewer laboratories (August 15, 2022 tr at 1423).
While OCME validated its LCN technique using 31 cycles, the UK, New Zealand and the Netherlands used 34 cycles. Id. Dr. Buckleton testified that there is nothing special about 28 cycles and that the cycle number varies across PCR applications in forensic science (August 15, 2022 tr at 1423). There are user guides for testing kits that provide protocols for cycles that go beyond 28 cycles. Id.
In 2009, Dr. Buckleton published an article in Forensic Science International: Genetics in response to the Hoey ruling (August 15, 2022 tr at 1427-1428). In the Hoey ruling, the court had stated that there were only two published papers supporting the LCN technique, but Dr. Buckleton stated that the court simply counted the number of papers as opposed to assessing the value of the information (August 15, 2022 tr at 1429). Dr. Buckleton further stated that journal editors are understandably reluctant to publish validation papers if the technique has already been published once and another laboratory subsequently repeated the work and obtained the same or similar results. Id. In that case, the subsequent papers will rightly be rejected as not novel and, hence, not worthy of publication. Id. Dr. Buckleton testified that he had encountered this during his work on both LCN and probabilistic genotyping software (August 15, 2022 tr at 1429-1430). However, it was brought out during cross-examination that OCME had its validation paper for low copy number published in 2009, and Dr. Buckleton's lab also published the validation for the New Zealand lab in 2010 (August 16, 2022 tr at 1506-1507). Dr. Buckleton testified that their paper was initially rejected by the publisher, which said it does not publish validations, but ultimately the paper was published. Id.
Dr. Buckleton testified that Dr. Budowle and Dr. van Daal had written articles criticizing LCN, emphasizing the deleterious effects of increased stochastic variation (August 15, 2022 tr at 1434-1435). In response, Dr. Buckleton and Dr. Gill had argued that stochastic effects simply get bigger as sensitivity increases, but that by probabilistically modeling these effects, one can ensure that one draws reliable inferences from the data (August 15, 2022 tr at 1435). In hindsight, Dr. Buckleton felt that Dr. Budowle had not come to grips with the elegance of the interpretation strategy and the highly beneficial aspect it brought to the interpretation of these types of data. Id.
Dr. Buckleton testified that in connection with quantification, OCME subdivides its classification system into unresolvable and resolvable mixtures (August 16, 2022 tr at 1511). A resolvable mixture is one with separate components of different quantities, i.e., one contributor may be LCN and another may be classed as conventional (August 16, 2022 tr at 1510-1511). Dr. Buckleton did not believe OCME dealt with the situation where a sample was degraded and the high molecular weight loci may be more prevalent than low molecular weight loci, i.e., within one contributor, some loci may be LCN and others may be classed as conventional. Id. Dr. Buckleton testified that the correct approach would be one like MixKin, where, rather than predicting drop-out, you integrate across the distribution; that approach was not introduced to the community until the 2010s. Id. Dr. Buckleton believed Dr. Budowle's concerns were with the assignment of the probability of drop-out and drop-in, especially by use of a quantitation value (August 16, 2022 tr at 1513). Dr. Buckleton believed Dr. Budowle had a problem with the implementations of the drop model, not the principle itself (August 16, 2022 tr at 1512-1513).
In Dr. Buckleton's opinion, some laboratories have phased out the use of LCN methodology because it has been superseded by the increased sensitivity of modern multiplexes (August 15, 2022 tr at 1437). Multiplexing means amplifying many loci at once, and modern multiplexes are more sensitive (August 15, 2022 tr at 1439). Dr. Buckleton testified that GlobalFiler is the most commonly used multiplex in the United States and that it has increased sensitivity all by itself without any enhanced sensitivity effort; as such, there has been less of a need for LCN (August 15, 2022 tr at 1437-1438).
GlobalFiler is not LCN; it is simply a better multiplex that can amplify low-template samples without increasing the cycle number (August 15, 2022 tr at 1438).
In Dr. Buckleton's opinion, LCN DNA testing methods are accepted within the relevant scientific community (August 15, 2022 tr at 1438). In addition, Dr. Buckleton considered LCN DNA testing reliable for use in criminal casework. Id.
In Dr. Buckleton's opinion, OCME's FST software was based on sound principles, and other genotyping software programs share the same general principles as FST (August 15, 2022 tr at 1440). FST is a semi-continuous model (August 15, 2022 tr at 1441). A continuous model uses peak height directly, whereas a semi-continuous model does not. Id. Dr. Buckleton testified that the FBI recently validated a new semi-continuous probabilistic genotyping software for use in its criminal casework. Id. Dr. Buckleton testified that one of the criticisms of FST concerns the drop-out rates (August 15, 2022 tr at 1443). All semi-continuous applications, and in fact all continuous ones as well, utilize a probability of allele drop-out. Id. Dr. Buckleton stated that the drop-out rate for FST is set by the quantification value, whereas the preferred method is to integrate it out. Id. However, even though using the quantification value is considered inferior to using peak height, it is not appalling or bad in any way (August 15, 2022 tr at 1444). Dr. Buckleton testified that the method of utilizing quant used by OCME is the single most sophisticated use of quant he has seen (August 15, 2022 tr at 1444-1445). In addition, SWGDAM's 2010 guidelines and John Butler's 2014 book mentioned that the quant value is a viable method (August 15, 2022 tr at 1445).
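For illustration only, the following Python sketch shows, in greatly simplified single-locus form, how a semi-continuous ("drop") model of the kind described above produces a likelihood ratio: drop-out and drop-in are treated as probabilities rather than being derived from peak heights, and the ratio compares a victim-plus-suspect hypothesis to a victim-plus-unknown hypothesis. The allele labels, allele frequencies, drop-out rate and drop-in rate are hypothetical, and the sketch is not FST's actual code or parameterization.

from itertools import combinations_with_replacement

FREQS = {"10": 0.25, "11": 0.30, "12": 0.25, "13": 0.20}   # hypothetical allele frequencies

def locus_prob(observed, contributors, d, c):
    # P(observed alleles | contributor genotypes), treating each allele copy as
    # dropping out independently with probability d, and any observed allele not
    # carried by a contributor as a drop-in event with probability c.
    prob = 1.0
    carried = set()
    for genotype in contributors:
        for allele in genotype:
            carried.add(allele)
            prob *= (1.0 - d) if allele in observed else d
    unexplained = observed - carried
    for allele in unexplained:
        prob *= c * FREQS[allele]
    if not unexplained:
        prob *= 1.0 - c
    return prob

def likelihood_ratio(observed, victim, suspect, d, c):
    # LR = P(E | victim + suspect) / P(E | victim + one unknown person),
    # summing over the unknown's possible genotypes at Hardy-Weinberg proportions.
    numerator = locus_prob(observed, [victim, suspect], d, c)
    denominator = 0.0
    for a, b in combinations_with_replacement(sorted(FREQS), 2):
        genotype_freq = FREQS[a] ** 2 if a == b else 2 * FREQS[a] * FREQS[b]
        denominator += genotype_freq * locus_prob(observed, [victim, (a, b)], d, c)
    return numerator / denominator

# Example: the suspect's "13" allele is missing from the observed profile (drop-out).
print(likelihood_ratio({"10", "11", "12"}, victim=("10", "11"), suspect=("12", "13"), d=0.3, c=0.05))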
Around 2018, Dr. Buckleton recreated or remodeled FST with Julia Gasston, an honors student in mathematics who was writing a thesis on the topic of the effect of the locus dropping function, and they were quantifying that effect (August 15, 2022 tr at 1446-1447). Dr. Buckleton testified that he had reviewed part of the FST source code and that the 0.97 function is announced in the code (August 15, 2022 tr at 1447). However, the code was not released, and the function was not mentioned in the original FST publication. Id. Dr. Buckleton testified that based on his conversation with Dr. Adele Mitchell, he had learned that it was dropped during the editing of the final publication. Id. Dr. Buckleton testified that it is certainly plausible that it was inadvertently dropped during editing of the publication, since material is removed during editing to make the paper more readable. Id. In Dr. Buckleton's opinion, the 0.97 function altered things roughly equally for the prosecution and for the defense (August 15, 2022 tr at 1448).
In 2019, Dr. Buckleton co-authored an article ("Gasston article") in a peer-reviewed journal on the subject of FST (August 15, 2022 tr at 1448-1449). The co-authors included Dr. James Curran, the head of the department of statistics at the University of Auckland, and Dr. Jo-Anne Bright, the senior science leader in the STRmix unit in Auckland, New Zealand (August 15, 2022 tr at 1449). It was Dr. Buckleton's understanding that the 0.97 function was added because of a negative likelihood ratio that occurred in casework (August 15, 2022 tr at 1452). However, Dr. Buckleton did not think the allele frequencies were the cause of the negative likelihood ratio at all and did not think the 0.97 function fixed it. Id. He thought it was fixed as a consequence of the other changes they had made. Id. The article gave an estimate of the range of circumstances in which the 0.97 function might be triggered. Id.
Subsequently, Dr. Buckleton and Dr. Curran revisited the 0.97 function in a non-peer-reviewed note (August 15, 2022 tr at 1452-1453). This paper was written because Dr. Buckleton was concerned about a number of significant misstatements that were being made in the public domain about FST, mostly in New York State by people working for the defense community (August 15, 2022 tr at 1453-1454). Dr. Buckleton thought the 0.97 function was not deliberately hidden in the code because there was a very clear announcement of it in the code (August 15, 2022 tr at 1458). Rather, Dr. Buckleton believed it was plausible that it was excised from the published paper during late editing. Id. In this case, Dr. Buckleton had reviewed the electropherograms in the defendant's case file and determined that the 0.97 function was not triggered here (August 15, 2022 tr at 1458-1459).
Dr. Buckleton testified that he knew Nathaniel Adams, who is a scientist working as a commentator regarding probabilistic genotyping (August 15, 2022 tr at 1459). Mr. Adams is employed by Bioinformatics in Ohio, which is not quite majority owned by Dr. Dan Krane. Id. Dr. Buckleton was aware that Mr. Adams had been involved in reviewing the source code for FST and STRmix (August 15, 2022 tr at 1459-1460). As far as Dr. Buckleton was aware, Mr. Adams had not made any negative comments about FST's code itself (August 15, 2022 tr at 1460). Mr. Adams had criticized FST for falsifying documentation regarding validation and disclosure. Id. Mr. Adams’ primary critique related to the lack of announcement of this particular function and lack of documentation of the performance check afterwards. Id.
Dr. Buckleton testified that Mr. Adams had also made critical comments of both FST and STRmix regarding his perceived view of their lack of adherence to the Institute of Electrical and Electronics Engineers ("IEEE") standard. Id. IEEE is an institute with about 400,000 members in the United States, and it issues guidance covering virtually the universe of software engineering (August 15, 2022 tr at 1461). Dr. Buckleton testified that IEEE has nothing specific to do with forensic science (August 15, 2022 tr at 1463). One of the IEEE standards Mr. Adams specifically discusses is the separation of testing from coding, meaning that testing is to be done independently (August 15, 2022 tr at 1461). This is a technical separation between the coders and the testers (August 15, 2022 tr at 1462). The standard also mentions managerial and fiscal independence, meaning there is no incentivization to pass or fail the software. Id. As for FST, it was Dr. Buckleton's understanding that OCME contracted with a private company, Sapphire, to do the coding, although the Sapphire coding engineers were actually embedded at OCME. Id. However, in Dr. Buckleton's opinion, OCME did have the managerial and fiscal independence in place, so it was probably close to conforming to the IEEE standard. Id. In any event, there are no forensic regulations or requirements that require the IEEE standard to be followed (August 15, 2022 tr at 1463).
Dr. Buckleton testified that source code errors are found by testing the program, not by reading the code (August 15, 2022 tr at 1465-1466). Dr. Buckleton noted that the locus dropping function in FST was not an error, but a deliberate conclusion (August 15, 2022 tr at 1466). Dr. Buckleton stated that his company has disclosed 14 miscodes in its software, which were all detected by testing (August 15, 2022 tr at 1466-1467). Even for programs that are open source (e.g., LabRetriever and EuroForMix), errors were found by testing them (August 15, 2022 tr at 1467). When an error was found in STRmix, a mini-validation was conducted. Id. In Dr. Buckleton's expert opinion, code review alone is not an effective solution to identifying problems within a code. Id. While it is possible, Dr. Buckleton has never seen code review find a fault, whereas he has seen many instances of testing finding an error. Id.
Dr. Buckleton was aware that OCME performed manual comparisons when evaluating a DNA profile prior to running the FST software (August 16, 2022 tr at 1470-1471). He also stated that OCME did not run FST on samples that were deemed excluded or inconclusive (August 16, 2022 tr at 1471). In Dr. Buckleton's opinion, manual exclusions would not take place in a non-contributor study like the one OCME conducted for the FST validation because doing tens of thousands of samples manually would be massively time-consuming. Id. In Dr. Buckleton's opinion, a non-contributor would usually give a likelihood ratio below one, but sometimes would be above one (August 16, 2022 tr at 1472). Likewise, a true contributor can give a likelihood ratio below one. Id. Dr. Buckleton testified that this is a limitation of DNA itself, and it has always been understood that some non-donors could match by chance. Id. Dr. Buckleton has reviewed the certified OCME documents in connection with this case, and he has performed calculations using the data from this case with a different probabilistic genotyping software, LRmix (August 16, 2022 tr at 1473-1474). LRmix is a probabilistic genotyping software developed by Dr. Peter Gill and Dr. Hilda Haned (August 16, 2022 tr at 1474). Dr. Buckleton picked LRmix because it is very similar to FST and he knew how to use it. Id. In doing this, Dr. Buckleton did not calculate the drop-out rate using LRmix; instead, he did it separately in Excel using a maximum likelihood estimation (August 16, 2022 tr at 1474-1475). Dr. Buckleton testified that the drop-out probabilities used by FST were lower than Dr. Buckleton's estimates, and FST's drop-out rate would produce a more conservative likelihood ratio (August 16, 2022 tr at 1475). In Dr. Buckleton's opinion, the drop-in rate used by OCME is at the higher end of the range compared to other programs because they encountered stutter peaks (August 16, 2022 tr at 1477). Dr. Buckleton testified that having a higher drop-in probability does not create false positives. Id. Dr. Buckleton also ran LRmix while doubling the drop-in rate and halving the drop-in rate (August 16, 2022 tr at 1478). In all the drop-out and drop-in experiments, the results showed that FST's estimate was conservative. Id. The likelihood ratio given by using LRmix was higher than the likelihood ratio of 476 million given by FST in this case. Id.
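The record does not spell out the exact model behind the Excel calculation described above, but the general idea of estimating a drop-out probability by maximum likelihood can be illustrated with the following Python sketch, which uses hypothetical counts and a simple binomial model.

import math

def dropout_log_likelihood(d, observed_copies, dropped_copies):
    # Binomial log-likelihood of a per-copy drop-out probability d, given counts of
    # expected allele copies that were seen versus missing across the replicates.
    return observed_copies * math.log(1.0 - d) + dropped_copies * math.log(d)

def mle_dropout(observed_copies, dropped_copies, grid=1000):
    # Grid-search maximum likelihood estimate of d.  (The closed form is
    # dropped / (observed + dropped); the grid search simply makes the
    # "maximum likelihood" step explicit.)
    candidates = [(i + 1) / (grid + 2) for i in range(grid)]
    return max(candidates,
               key=lambda d: dropout_log_likelihood(d, observed_copies, dropped_copies))

# Hypothetical example: across three replicates, 40 expected allele copies
# were observed and 14 were missing.
print(mle_dropout(observed_copies=40, dropped_copies=14))   # roughly 14/54, about 0.26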
Furthermore, in this case, Dr. Buckleton found that OCME followed its own protocols when assigning this as a two-person mixture (August 16, 2022 tr at 1479). After independently evaluating the data and estimating the number of contributors, Dr. Buckleton concluded that the mixture was either a two-person mixture with a bit of drop-in or a three-person mixture with a lot of drop-out. Id. This was because there was a minor indication of a third contributor at one locus. Id. Using Dr. Buckleton's preferred parameters for drop-out, the likelihood ratio is 3.2 billion for a two-person mixture and 2.2 billion for a three-person mixture. Id. Therefore, FST produced a more conservative likelihood ratio than LRmix (August 16, 2022 tr at 1479-1480).
Dr. Buckleton testified that while they are making every effort they can to get an accurate likelihood ratio, they do not know the exact likelihood ratio (August 16, 2022 tr at 1480). There are certain things they cannot and do not know about a profile, and therefore, they express a range of possible interpretations. Id. In Dr. Buckleton's opinion, from looking at the possible interpretations in his hands, FST's estimate was conservative, which favors the defendant. Id.
In Dr. Buckleton's opinion, probabilistic genotyping programs that interpret DNA mixtures and generate likelihood ratios in forensic case work are generally reliable and accepted within the relevant scientific community. Id. In addition, it was Dr. Buckleton's opinion that semi-continuous probabilistic genotyping software such as FST was suitable to be used in forensic case work (August 16, 2022 tr at 1481). Furthermore, it was Dr. Buckleton's opinion that the methods used by FST are reliable and generally accepted in the scientific community. Id.
Sometime around 1999, when Dr. Buckleton was at FSS, Dr. Buckleton and Dr. Gill wrote a paper on the basis of the drop model (August 16, 2022 tr at 1484). The basis included the possible genotype combinations of a mixture sample, including incorporating drop-out and drop-in. Id. Not everyone in the FSS agreed with the drop model, including Dr. Ian Evett (August 16, 2022 tr at 1485). Dr. Buckleton testified that Dr. Evett is a pioneer in the use of forensic likelihood ratios and was an early teacher of Dr. Buckleton. Id. In 1998, Dr. Evett wrote a paper on the use of peak heights and DNA mixture analysis, where he opposed the use of the drop model (August 16, 2022 tr at 1486). In 2005, Dr. James Curran wrote a paper about LoComatioN, a tool he made to provide a statistical expression of the evidence for low copy number testing (August 16, 2022 tr at 1486-1487). By this time, Dr. Buckleton had left FSS, but he was aware that Dr. Evett had strongly advised FSS against the use of LoComatioN, and therefore, it was not used (August 16, 2022 tr at 1487-1488). Dr. Buckleton testified that around this time, Dr. Evett had great influence regarding whether or not a drop model should be used (August 16, 2022 tr at 1488). As far as Dr. Buckleton knew, Dr. Evett was an opponent of the drop-in but an advocate of continuous models like STRmix (August 16, 2022 tr at 1489).
Back in 1999, Dr. Mark Perlin started developing TrueAllele (August 16, 2022 tr at 1490). In its first conception, TrueAllele was for single source materials, and it was in the mid-2000s that Dr. Perlin started thinking about mixtures. Id. There are some similarities between STRmix and TrueAllele. Id. A central element of STRmix's algorithm is the Markov chain Monte Carlo ("MCMC") method based on the Metropolis-Hastings algorithm (August 16, 2022 tr at 1491-1492). TrueAllele also has an algorithm based upon MCMC and the Metropolis-Hastings algorithm (August 16, 2022 tr at 1492). Dr. Perlin has said that TrueAllele used all of the peak information in the electropherogram and took into account PCR and instrument variation. Id. Dr. Buckleton did not recall Dr. Perlin criticizing the drop model (August 16, 2022 tr at 1495).
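The MCMC mechanism referred to above can be illustrated, in greatly simplified form, with the following Python sketch of a Metropolis-Hastings sampler; the target distribution here is a toy one-dimensional example rather than a genotype-weight model, and nothing in the sketch is drawn from STRmix's or TrueAllele's code.

import math
import random

def metropolis_hastings(log_target, start, steps=10000, proposal_sd=0.5):
    # Draw samples from a distribution known only up to a constant, using
    # symmetric random-walk proposals and the Metropolis-Hastings accept/reject rule.
    current = start
    current_logp = log_target(current)
    samples = []
    for _ in range(steps):
        proposal = current + random.gauss(0.0, proposal_sd)
        proposal_logp = log_target(proposal)
        accept_prob = math.exp(min(0.0, proposal_logp - current_logp))
        if random.random() < accept_prob:
            current, current_logp = proposal, proposal_logp
        samples.append(current)
    return samples

# Toy target: a standard normal density, specified only up to its normalizing constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x, start=0.0)
print(sum(draws) / len(draws))   # the sample mean should land near 0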
In 2000, Dr. Gill, Dr. Buckleton and others authored an article titled, "An Investigation of the Rigor of Interpretation Rules for STRs Derived from Less than 100 pg of DNA" (August 16, 2022 tr at 1496-1497). In 2001, Dr. Budowle co-authored an article opposing the use of increased amplification for LCN (August 16, 2022 tr at 1503). Dr. Budowle was in opposition to the technique that Dr. Buckleton and his co-authors were proposing (August 16, 2022 tr at 1503-1504). Although Dr. Budowle was a leading scientist at the FBI at the time, Dr. Buckleton believed that Dr. Budowle's article had a disclaimer that the views in the article were his private opinion (August 16, 2022 tr at 1504). Dr. Budowle then went to the University of North Texas Health Science Center, where low copy number testing was investigated but not used in criminal court (August 16, 2022 tr at 1505).
Dr. Torben Tvedebrink is a Danish mathematician, and Dr. Buckleton has co-authored papers with him on the modeling of allelic drop-out and on degradation and its effect on allelic drop-out (August 16, 2022 tr at 1519-1520). Dr. Buckleton also co-authored papers with Dr. Curran, Dr. Bright and Dr. Taylor on degradation (August 16, 2022 tr at 1520). In each of these papers, Dr. Buckleton tested the efficacy of an exponential curve in peak heights to determine drop-out. Id. Dr. Buckleton testified that this is a strategy that has been endorsed by other scientists in the field (August 16, 2022 tr at 1521). In the design of drop-out for FST, OCME used a linear interpolation between different quant values (August 16, 2022 tr at 1522). As far as Dr. Buckleton was aware, FST was the only drop model that used a linear interpolation for estimating the probability of drop-out (August 16, 2022 tr at 1523). However, Dr. Buckleton testified that there was no connection between degradation and interpolating the quant. Id.
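For illustration only, the following Python sketch contrasts the two approaches mentioned above: an exponential decay of expected peak height with fragment size (the degradation modeling Dr. Buckleton described testing) and a linear interpolation of a drop-out probability between quantification values (the design described for FST). All of the numbers are hypothetical and are not taken from either system.

import math

def expected_peak_height(size_bp, height_at_100bp=2000.0, decay_per_bp=0.005):
    # Exponential degradation model: expected peak height decays as fragment size grows.
    return height_at_100bp * math.exp(-decay_per_bp * (size_bp - 100.0))

def interpolated_dropout(quant_pg, anchors=((25.0, 0.60), (100.0, 0.25), (500.0, 0.05))):
    # Linear interpolation of a drop-out probability between quant anchor points,
    # clamped at the lowest and highest anchors.
    if quant_pg <= anchors[0][0]:
        return anchors[0][1]
    for (q0, d0), (q1, d1) in zip(anchors, anchors[1:]):
        if quant_pg <= q1:
            frac = (quant_pg - q0) / (q1 - q0)
            return d0 + frac * (d1 - d0)
    return anchors[-1][1]

print(expected_peak_height(300.0))   # smaller expected peak at a larger fragment size
print(interpolated_dropout(60.0))    # drop-out rate between the 25 pg and 100 pg anchors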
Dr. Buckleton testified that he has not run FST (August 16, 2022 tr at 1531). Dr. Buckleton's knowledge of FST comes from peer-reviewed articles and reviewing large parts of the source code and his brief conversation with Dr. O'Connor (August 16, 2022 tr at 1531-1532). Dr. Buckleton testified that he can critique any software, including FST, but stated that the question here is whether it was acceptable, not whether it is the gold standard (August 16, 2022 tr at 1553).
(4) Dr. James Curran
Dr. James Curran was deemed qualified as an expert in the field of forensic statistics, interpretation of DNA evidence, interpretation of DNA mixtures and probabilistic genotyping software (August 17, 2022 tr at 1560). Dr. Curran is employed by the University of Auckland in Auckland, New Zealand in the department of statistics (August 17, 2022 tr at 1562). He is a professor of statistics and is currently the head of the department of statistics. Id. Prior to working at the University of Auckland, Dr. Curran worked at the University of Waikato (August 17, 2022 tr at 1563). While Dr. Curran was at Waikato and during part of his time at Auckland, he was contracted by the FSS in the UK primarily in the areas of software development and innovation for methods of interpreting DNA. Id. In conjunction with that, he also worked with ESR in New Zealand in developing software methodology for interpretation of evidence, which included DNA evidence. Id. Dr. Curran is not a professional programmer, but he has been programming since he was 11 years old, which was 41 years ago, and he has experience helping write software at FSS and ESR (August 17, 2022 tr at 1563-1564).
Dr. Curran's professional experience in interpretation of DNA evidence started around 1996 or 1997 when he started working with Dr. Buckleton on methods for interpreting mixtures (August 17, 2022 tr at 1566). At the time, the methodology was not well understood, and it was not commonly used. Id. Dr. Curran also has experience in probabilistic genotyping software programs, and he is the principal author of LoComatioN, a semi-continuous DNA interpretation method (August 17, 2022 tr at 1564, 1566). In addition, Dr. Curran contributed very small parts to at least the methodology of STRmix (August 17, 2022 tr at 1566). Dr. Curran has also written peer-reviewed articles regarding likelihood ratios and mixture interpretation. Id.
In 1999, Dr. Curran published an article in the Journal of Forensic Sciences regarding the interpretation of DNA mixtures (August 17, 2022 tr at 1567). The co-authors of that article were Chris Triggs, John Buckleton and Bruce Weir. Id. Dr. Triggs and Dr. Buckleton were Dr. Curran's PhD advisors, and Dr. Weir was Dr. Curran's post-doctoral advisor (August 17, 2022 tr at 1568). Dr. Weir was the chair of the NYS DNA Subcommittee at one point. Id. One of the objectives of the article was to provide a blueprint in implementing mathematical calculations in a probabilistic genotyping software (August 17, 2022 tr at 1568-1569). The method described in this article lies at the heart of nearly every probabilistic genotyping software because at some point, this calculation has to be done (August 17, 2022 tr at 1569). FST, STRmix, LikeLTD, LabRetriever and DNAView are some of the programs that have utilized this mathematical equation, which include semi-continuous and continuous models. Id.
Random Man Not Excluded ("RMNE") is a statistic which computes the portion of the population that would not be excluded as a contributor to the crime scene stain under certain assumptions (August 17, 2022 tr at 1570). In 2008, Dr. Curran co-authored an article with Dr. Buckleton in Forensic Science International: Genetics comparing RMNE and the combined probability of inclusion ("CPI") to the use of likelihood ratios in DNA mixture interpretations. Id. At the time, in the United States, RMNE was the method used predominantly, and there was a lot of resistance to the likelihood ratio as a method of interpreting DNA mixtures (August 17, 2022 tr at 1571). The objective of this article was to objectively look at the difference between the two statistics, what they did and did not do and what the advantages of each were (August 17, 2022 tr at 1571-1572). In Dr. Curran's view, use of the likelihood ratio in mixture interpretation was an advancement over the use of the RMNE method, using more of the available information to answer the question that the courts were interested in (August 17, 2022 tr at 1572).
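For illustration only, the RMNE/CPI statistic described above can be sketched in Python as follows: at each locus the frequencies of the observed alleles are summed and squared, and the per-locus values are multiplied together. The allele frequencies and observed mixture below are hypothetical; note that, unlike a likelihood ratio, the statistic itself does not depend on the assumed number of contributors or on the compared person's profile.

def combined_probability_of_inclusion(mixture, freqs):
    # mixture: {locus: set of observed alleles}; freqs: {locus: {allele: frequency}}.
    cpi = 1.0
    for locus, alleles in mixture.items():
        locus_sum = sum(freqs[locus][a] for a in alleles)
        # probability that a random person carries only alleles seen in the mixture
        cpi *= locus_sum ** 2
    return cpi

freqs = {"D8": {"10": 0.10, "11": 0.30, "12": 0.25},
         "TH01": {"6": 0.20, "7": 0.15, "9.3": 0.30}}
mixture = {"D8": {"10", "11", "12"}, "TH01": {"6", "9.3"}}
print(combined_probability_of_inclusion(mixture, freqs))   # fraction of the population not excluded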
The article stated that, at the time, most labs left out any locus that may have dropped out where the suspect had an allele that is not present in the mixture, and advised that any locus that the scientist intends to leave out be critically examined before the decision is made (August 17, 2022 tr at 1590). This is a practice that does not happen that often anymore, and this technique could either increase or decrease the statistic (August 17, 2022 tr at 1590-1591). Dropping a locus is a feature of the FST software, but not of LoComatioN, STRmix, TrueAllele, LabRetriever or Lira.
Over the past decade, there have been continued improvements within the forensic science community regarding probabilistic genotyping software. Id. Dr. Curran testified that they are now able to take into account more of the information that is available, primarily from the electropherogram, but also from the expert knowledge they have from the forensic DNA community, and refine the software (August 17, 2022 tr at 1573). Dr. Curran believed that the likelihood ratio is now the method of choice for DNA interpretation in the United States. Id. Dr. Curran testified that in his view, the development of fully continuous programs does not make semi-continuous programs unreliable, and the methodology still stands. Id. Semi-continuous methods may become obsolete, but the use of fully continuous methods is still somewhat burdensome, including some financial constraints. Id. Therefore, in Dr. Curran's opinion, semi-continuous methods are not completely obsolete. Id.
In 2014, Dr. Curran co-authored an article with Dr. Buckleton and two PhD students, Hannah Kelly and Jo Bright, in the Journal of Science and Justice explaining the difference between the types of interpretation models for DNA that have been used over time (August 17, 2022 tr at 1574-1575). Back in 2008, there were only a couple of probabilistic genotyping software programs available in the market, including TrueAllele and LikeLTD (August 17, 2022 tr at 1576). At the time, TrueAllele was very expensive, and it also required dedicated computing hardware in order for the computations to be feasible. Id. In addition, it required continuous tech support from the developer (August 17, 2022 tr at 1577).
It was stated in this article that there was no consensus within the forensic biology community as to how complex mixtures and low template DNA profiles should be interpreted (August 17, 2022 tr at 1596). It was Dr. Curran's opinion that now the consensus is that fully continuous probabilistic genotyping is the way to interpret complex DNA mixtures. Id. As far as Dr. Curran was aware, TrueAllele was the first fully continuous program in the market back in 2008. Id. It was Dr. Curran's opinion that likelihood ratio is accepted to be the most powerful and relevant statistic used to calculate the weight of the DNA evidence (August 17, 2022 tr at 1600).
Dr. Curran was familiar with the NYC OCME and knew Dr. Adele Mitchell (August 17, 2022 tr at 1578). In Dr. Curran's opinion, Dr. Mitchell is a member of the relevant scientific community. Id. Dr. Curran was also familiar with FST, which was developed by Dr. Mitchell. Id. Dr. Curran testified that he was aware that OCME was the first or one of the first labs in the United States to implement probabilistic genotyping or likelihood ratios, using the method Dr. Curran and Dr. Peter Gill described in their 2005 paper (August 17, 2022 tr at 1579). Dr. Curran was not involved in the development of FST (August 17, 2022 tr at 1580).
In 2005, Dr. Curran, Dr. Gill and Dr. Martin Bill wrote an article describing and formalizing the calculations for a semi-continuous method of DNA mixture interpretation. Id. At the time, Dr. Gill and Dr. Bill were both working at the FSS in the UK (August 17, 2022 tr at 1581). In Dr. Curran's opinion, Dr. Gill was probably the world's foremost expert in DNA, both in the interpretation of DNA and in forensic DNA biology. Id. In addition, Dr. Curran considered Dr. Gill a member of the relevant scientific community. Id. Around 2000, Dr. Gill and Dr. Buckleton had proposed a semi-continuous method for mixture interpretation, and this article formalized the mathematics required to interpret any DNA mixture and extended it to allow for any number of contributors (August 17, 2022 tr at 1582). Software implementing these calculations, called LoComatioN, was created, but it was never put on the market due to the internal politics of FSS (August 17, 2022 tr at 1582-1583). It was brought out during cross-examination that Dr. Ian Evett, who is a prestigious member of this community, had opposed the implementation of LoComatioN (August 17, 2022 tr at 1603-1604).
In 2019, Dr. Curran co-authored an article examining FST with Julia Gasston, Maarten Kruijver, Jo-Anne Bright, Simone Pugh and John Buckleton (August 17, 2022 tr at 1584-1585). At the time, in the U.S., and predominantly in the defense community, there were a lot of discussions about the reliability of forensic software (August 17, 2022 tr at 1585). Specifically, they were saying FST was not reliable because it did not adhere to certain engineering guidelines or software engineering guidelines and was not open to public scrutiny. Id. They were also arguing that FST was substantially prejudicial to defendants. Id. The article was written to address these issues and to see whether the software was uniformly disadvantageous to a defendant. Id. Dr. Curran downloaded the FST software and looked at the source code, but reprogrammed it instead of just running the program (August 17, 2022 tr at 1585-1586). Dr. Curran testified that in order to use a commercially viable software system, compilation and extra software are required, and oftentimes you cannot just download and run it (August 17, 2022 tr at 1586). So, the software was reprogrammed so it would be useful for the purpose of the research without using the user interface over and over again. Id. Dr. Curran testified that he did not reach out to OCME for assistance in getting FST installed, and he did not read the OCME protocols for this study (August 17, 2022 tr at 1611). In addition, this study did not use the allele frequencies employed by the OCME for FST (August 17, 2022 tr at 1612). However, in Dr. Curran's opinion, for the objective of this paper, which was to examine the locus dropping function, not using the same allele frequencies was not an issue (August 17, 2022 tr at 1615).
Dr. Curran's understanding of the 0.97 function of FST was that whenever the sum of the allele frequencies at a particular locus in a crime scene stain exceeds 0.97, that locus is omitted from the calculation. Id. The conclusion of the article regarding the 0.97 function was that it was not uniformly prejudicial to the defendant (August 17, 2022 tr at 1586-1587). The consensus among the co-authors was that FST was reliable and that it is not unduly affected by this function (August 17, 2022 tr at 1587).
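For illustration only, the locus-dropping rule as described in this testimony can be sketched in Python as follows; the allele frequencies and locus names are hypothetical, and the sketch is not FST's source code.

LOCUS_DROP_THRESHOLD = 0.97

def loci_retained(mixture, freqs, threshold=LOCUS_DROP_THRESHOLD):
    # Keep a locus only if the summed frequencies of its observed alleles do not
    # exceed the threshold; otherwise the locus is omitted from the calculation.
    kept = []
    for locus, alleles in mixture.items():
        if sum(freqs[locus][a] for a in alleles) <= threshold:
            kept.append(locus)
    return kept

freqs = {"D8": {"10": 0.10, "11": 0.30, "12": 0.25},
         "FGA": {"20": 0.35, "21": 0.33, "22": 0.31}}
mixture = {"D8": {"10", "11", "12"},      # summed frequency 0.65, so the locus is kept
           "FGA": {"20", "21", "22"}}     # summed frequency 0.99, so the locus is dropped
print(loci_retained(mixture, freqs))      # ['D8']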
Subsequently, Dr. Curran and Dr. Buckleton wrote a follow-up article on FST. Id. At the time, Dr. Buckleton was involved in extensive court work regarding the reliability of probabilistic genotyping software, and this article was written to explain the relevant details for people wanting to understand the issues quickly (August 17, 2022 tr at 1588). In Dr. Curran's opinion, the 0.97 function was not a hidden function because there were comments explaining what the function exactly did (August 17, 2022 tr at 1588). In addition, if it were a hidden function, one would have made the code quite hard to understand, but that was not the case here. Id.
In Dr. Curran's expert opinion, FST software is a semi-continuous probabilistic genotyping program that is suitable to be used in forensic casework. Id. In addition, it was Dr. Curran's opinion that the methods used in FST are considered reliable and generally accepted in the scientific community, and they are still used in the interpretation of DNA to this date. Id.
It was brought out during cross-examination that Dr. Curran was aware that Dr. Bruce Budowle, Dr. Ranajit Chakraborty and Dr. Ian Evett did support the use of the likelihood ratio in forensic DNA analysis (August 17, 2022 tr at 1601-1602, 1604). In addition, Dr. Curran was aware that Dr. Budowle and Dr. Evett supported the use of fully continuous probabilistic genotyping programming (August 17, 2022 tr at 1601, 1604).
Defense's Witnesses
(1) Dr. Dan Krane
Dr. Dan Krane is a professor of biological sciences at Wright State University in Dayton, Ohio, and also has a courtesy appointment in the department of computer science (August 25, 2022 tr at 1623). Dr. Krane has a double major in biology and chemistry from John Carroll University in Cleveland, Ohio (August 25, 2022 tr at 1624). Dr. Krane earned his PhD in biochemistry from the cell and molecular biology program at Penn State University in State College, Pennsylvania. Id. Dr. Krane did his post-doctoral research, first in the genetics department of the medical school at Washington University in St. Louis, and then later in the department of organismic and evolutionary biology at Harvard University. Id. Over the past 20 to 30 years, Dr. Krane has published in a variety of peer-reviewed journals on various topics, including the application of population genetics and molecular biology in the context of forensic DNA profiling (August 25, 2022 tr at 1626). Dr. Krane's publications included estimations of and approaches for determining the number of contributors to mixed samples, problems and potential problems associated with examiner bias and context effects, and other matters such as variability in the measurement of the heights of peaks as it relates to the quantity of DNA that was used to amplify the DNA profile (August 25, 2022 tr at 1626-1627). Dr. Krane has also published in the field of molecular biology (August 25, 2022 tr at 1627). Dr. Krane has been appointed to the Virginia Scientific Advisory Committee, which oversees the policies and practices of the Virginia Department of Forensic Science, and was involved in validating certain testing (August 25, 2022 tr at 1631-1632).
Dr. Krane is the co-founder, president and the chief executive officer of a company called Forensic Bioinformatics ("Bioinformatics"), which is a consulting company (August 25, 2022 tr at 1632-1633). Bioinformatics was founded in April of 2002 (September 23, 2022 tr at 1893). Bioinformatics has two full-time employees and a number of part-time employees. Id. The full-time employees are Mr. Nathaniel Adams and Ms. Carrie Roland (August 25, 2022 tr at 1633). Mr. Adams is a systems engineer and Ms. Roland is an analyst (September 23, 2022 tr at 1894). The company was incorporated in the state of Ohio, but in 2018, it registered as a foreign corporation in the state of New Jersey (September 23, 2022 tr at 1902). Mr. Clinton Hughes, one of the defense counsels in this case, was the registered agent for that foreign corporation in New Jersey (September 23, 2022 tr at 1903). The company provides assistance to clients interested in having an objective review of DNA test results, typically in the context of a criminal trial. Id. Most of the customers, almost without exception, were criminal defense attorneys. Id. There was one federal case in Michigan, United States v. Gissantaner , which involved the use of STRmix, where Dr. Krane was retained by the federal court as an independent expert (September 23, 2022 tr at 1916-1917). In that case, Dr. Krane's employee, Mr. Adams was retained by the defense (September 23, 2022 tr at 1917). When Gissantaner was appealed, the U.S. Court of Appeals for the Sixth Circuit stated in an opinion that the fact that Dr. Krane was the president of the firm that employed the defendant's expert did not prohibit Dr. Krane from offering expert opinion about STRmix on behalf of the defendant, but it meant "he should not have been treated as independent rule 706 expert in this case." (September 23, 2022 tr at 1923-1924).
Dr. Krane is a co-inventor of GenoPhiler, which performs automated reviews of large amounts of material that are provided in the context of a forensic DNA profile analysis. Id. In addition, Dr. Krane's company developed a program called GenoStat, which attached statistical weights to DNA profiles. Id. Dr. Krane has never worked in a public crime laboratory and has never generated DNA profiles from crime scene samples (August 25, 2022 tr at 1634, 1639). Dr. Krane has generated a DNA profile in a non-criminal context a number of times (August 25, 2022 tr at 1634). Dr. Krane has reviewed over 10,000 forensic DNA cases from a large number of different laboratories (August 25, 2022 tr at 1634-1635).
Dr. Krane was retained for his expert opinion in this case by the defendant and was compensated (September 23, 2022 tr at 1926-1928). Mr. Adams was also expected to be compensated for his work on this case (September 23, 2022 tr at 1928).
Dr. Krane was deemed an expert in the areas of molecular biology, bioinformatics and the general area of forensic DNA analysis and forensic DNA interpretation (August 25, 2022 tr at 1650-1651). However, he was not deemed an expert in probabilistic genotyping software or on LCN DNA testing. Id.
A forensic DNA mixture is a DNA sample that contains DNA or a contribution to DNA from two or more individuals (August 25, 2022 tr at 1658). Generally speaking, the larger the number of contributors and the smaller the quantity of DNA, the more challenging the interpretation becomes (August 25, 2022 tr at 1659). The PCAST report stated that the most widely used approach for attaching a statistical weight to mixed DNA samples at the time of the report, which was seven or so years ago, was not foundationally valid; that statistic, specifically, is and was known as the combined probability of inclusion (August 25, 2022 tr at 1660-1661). Dr. Krane agreed with the PCAST report that probabilistic genotyping showed promise as a means of addressing this concern. Id. In addition, Dr. Krane agreed that some probabilistic genotyping approaches have been demonstrated to be foundationally valid for a narrowly defined range of samples: samples that contained DNA from three or fewer individuals, samples where each of the contributors was contributing an appreciable amount of DNA, and samples where the mixture ratios, the relative proportions of the contributions, were within a fairly narrow range. Id. It was brought out during cross-examination that while Dr. Krane agreed with the PCAST report that the random match probability, referred to as RMP or CPI, was not statistically valid, the website for Forensic Bioinformatics has a program called GenoStat available for users, which reports statistics in the form of RMP or CPI (September 23, 2022 tr at 1937-1940). GenoStat has an option of letting the user choose whether to include CPI calculations or exclude them from the final report (September 23, 2022 tr at 1943).
The PowerPoint used by Dr. O'Connor on FST during his testimony stated that 0.03 percent of all non-contributor comparisons generated a likelihood ratio greater than one (August 25, 2022 tr at 1663). Dr. Krane testified that this means there was a false inclusion rate of 0.03 percent which, generally speaking, is pretty good (August 25, 2022 tr at 1664). However, Dr. Krane testified that this is not necessarily applicable to every case (August 25, 2022 tr at 1664-1665). The term factor space is used to refer to parameters that need to be taken into consideration that could influence the likelihood ratios that a program like FST might generate (August 25, 2022 tr at 1665-1666). Some examples include the number of contributors or the relative contribution of the contributors (August 25, 2022 tr at 1666). Dr. Krane testified that a laboratory's validation work should explore these factor spaces with the intent of establishing boundaries. Id.
In 2005, Dr. Krane co-authored an article in the Journal of Forensic Science titled, "Empirical Analysis of the STR Profiles Resulting from Conceptual Mixtures" (August 25, 2022 tr at 1668). For this article, a thousand different real-world DNA profiles were used to conceptually create all possible two-, three- and four-person mixtures, and the authors then looked to see what alleles would be seen in the DNA mixture (August 25, 2022 tr at 1669). The study found that there was a significant risk of underestimating the number of contributors to mixed samples using a method that could be conveniently described as allele counting (August 25, 2022 tr at 1669-1670). In addition, the study found that the risk of underestimating increased very dramatically as the number of known contributors to the mixture increased (August 25, 2022 tr at 1670). It was Dr. Krane's opinion that the risk of underestimating or undercounting the number of contributors to a mixture that applied in 2005 was still just as applicable today (August 25, 2022 tr at 1670-1671). Dr. Krane testified that stochastic effects unequivocally complicate the interpretation of DNA test results (August 25, 2022 tr at 1673). For instance, allelic drop-out would exacerbate the problem of undercounting or underestimating the number of contributors to a mixture. Id. In addition, with lower-level samples where suboptimal amounts of template DNA are available, the peak height information would no longer be a useful tool or guide (August 25, 2022 tr at 1673-1674). Dr. Krane testified that, using the test kit that was used by OCME in this case, his study showed that there was a significant risk of underestimating the number of contributors to known four-person mixtures using the allele counting approach (August 25, 2022 tr at 1677-1678). The results showed that there was approximately a 68 percent chance of underestimating for Caucasians, and the risk was greater for Asians and less for African Americans (August 25, 2022 tr at 1678-1679).
It was brought out during cross-examination that this study did not take into account stutter, drop-in, drop-out or contamination (September 23, 2022 tr at 1955). Based on Dr. Krane's experience, no accredited laboratory solely uses allele counting method (September 23, 2022 tr at 1958). Dr. Krane testified that just looking at the allele count and looking at nothing else, you would miss some helpful information, but many laboratories rely only on the information from the number of alleles. Id. In this case, Dr. Krane found no documentation in the forensic statistics comparison report that suggests OCME used anything other than an allele counting approach (September 23, 2022 tr at 1958-1959). Dr. Krane saw no documentation that electropherograms in the OCME's report were incorporated into OCME's assessment of the number of contributors (September 23, 2022 tr at 1959). Dr. Krane testified that OCME's protocols give analysts a very wide latitude for interpretation (September 23, 2022 tr at 1960). For instance, OCME protocols stated that the amount of DNA amplified should be considered but gives no guidance about how that consideration should be made (September 23, 2022 tr at 1961).
Dr. Krane testified that because any one contributor would be expected to show one or two peaks at any given location, if, for example, there were seven alleles at a certain locus, the conclusion would be that the mixture was of at least four individuals (August 25, 2022 tr at 1679-1680). However, in this example, OCME's interpretation guidelines suggested that only if you see at least two loci with seven alleles would you conclude that it was a four-person mixture (August 25, 2022 tr at 1680-1681). In Dr. Krane's opinion, by adopting an approach where you set aside the information from the locus with the largest number of alleles, the mischaracterization rate becomes very much exacerbated, such that less than five percent of known four-person Caucasian mixtures would actually be characterized as a four-person mixture (August 25, 2022 tr at 1680). Dr. Krane testified that the determination of the number of contributors is foundationally important because for programs like FST, the analyst is required to enter into the program the number of contributors to a mixed sample (August 25, 2022 tr at 1681). Dr. Krane was not aware of any laboratory other than OCME that takes the approach that there needs to be two loci with the largest number of alleles (August 26, 2022 tr at 1685). In Dr. Krane's opinion, OCME's approach of using two loci was not a generally accepted way to determine the number of contributors to a sample (August 26, 2022 tr at 1686).
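For illustration only, the allele-counting logic described in this testimony can be sketched in Python as follows. The first function gives the usual minimum-contributor floor (each person contributes at most two alleles per locus); the second sketches the stricter two-locus reading of OCME's guideline that Dr. Krane describes. The example profile and locus names are hypothetical.

import math

def min_contributors_allele_count(profile):
    # profile: {locus: set of observed alleles}.  The locus with the most alleles
    # sets the floor, since each contributor adds at most two alleles per locus.
    return max(math.ceil(len(alleles) / 2) for alleles in profile.values())

def min_contributors_two_locus_rule(profile):
    # Sketch of the stricter reading: the highest per-locus count is used only if
    # at least two loci support it; otherwise fall back to the next-highest count.
    counts = sorted((math.ceil(len(a) / 2) for a in profile.values()), reverse=True)
    return counts[1] if len(counts) > 1 else counts[0]

profile = {"D8": {"10", "11", "12", "13", "14", "15", "16"},   # 7 alleles -> at least 4 people
           "TH01": {"6", "7", "9.3"},                          # 3 alleles -> at least 2 people
           "FGA": {"20", "21", "22", "23", "24"}}              # 5 alleles -> at least 3 people
print(min_contributors_allele_count(profile))     # 4
print(min_contributors_two_locus_rule(profile))   # 3 under the two-locus reading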
In this case, OCME indicated in FST that this was a two-person mixture (August 26, 2022 tr at 1686). However, Dr. Krane testified that looking at the DNA profiles, there was at least one location where there are a total of six alleles, which would indicate that the sample originated from at least three individuals (August 26, 2022 tr at 1687-1688). In addition, there were a number of loci where all three replicates had no test results meaning that there were no alleles detected that were suitable for interpretation (August 26, 2022 tr at 1688-1689). This is an indication of locus and allelic drop-outs (August 26, 2022 tr at 1689). Dr. Krane testified that locus drop-out is something that should be taken into consideration in determining the number of contributors (August 26, 2022 tr at 1690). In reviewing the OCME file, Dr. Krane did not find any indication that anything other than allele counting was used to determine the number of contributors (August 26, 2022 tr at 1691). Dr. Krane would characterize this case as a mixture of at least three individuals, and stated that it could easily be a mixture of four individuals with a fair amount of drop-out. Id.
Dr. Krane has seen laboratories other than OCME using a DNA interpretation method which called for not using DNA information that is present in an electropherogram (August 26, 2022 tr at 1692). Dr. Krane testified that while this was a fairly common practice until about five years ago, it was his opinion that it was now widely recognized as not an appropriate approach (August 26, 2022 tr at 1693). Dr. Krane agreed with the statement in an article written by Dr. Buckleton and Dr. James Curran that at the time of creation of the FST software, there was a widespread misunderstanding that omitting loci was conservative, but testified that this view was incorrect (August 26, 2022 tr at 1694). Dr. Krane testified that ignoring loci containing DNA information when performing statistical calculations is no longer considered best practice (August 26, 2022 tr at 1694-1695).
In this case, the quantitation of DNA from the crime scene sample was 19 picograms of DNA (August 26, 2022 tr at 1695). If this was a four-person mixture, each individual would have given something less than 19 picograms (August 26, 2022 tr at 1696). Identifiler was the DNA test kit used by OCME to generate the DNA profiles in this case. Id. The manufacturer of Identifiler recommended using one nanogram of template DNA for optimum results. Id.
A picogram is one thousandth of a nanogram.
OCME used the quantification value to measure drop-out, and as far as Dr. Krane was aware, the Austin Police Department laboratory was the only other laboratory that did this, but that lab has since been shut down (August 26, 2022 tr at 1696-1697). In Dr. Krane's opinion, measuring drop-out with the quantification value was not appropriate and was not an approach generally accepted for measuring drop-out (August 26, 2022 tr at 1697-1699).
OCME validated FST with two- and three-person DNA mixtures down to 25 picograms, and then extrapolated drop-out values for mixtures below 25 picograms (August 26, 2022 tr at 1700). However, Dr. Krane testified that extrapolation is inappropriate in establishing the reliability of a methodology in forensic DNA profiling during a validation study. Id. Dr. Krane testified that validation work should establish the boundaries beyond which its results should not be relied upon, and that it is unsafe, unsound and scientifically inappropriate to use extrapolation to determine reliability outside the range of samples that were tested during the course of validation (August 26, 2022 tr at 1700-1701). Dr. Krane testified that what FST does with drop-out rates below 25 picograms can be characterized as extrapolation (August 26, 2022 tr at 1703).
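For illustration only, the range issue described above can be sketched in Python as follows: a drop-out rate characterized only for quant values down to 25 picograms is applied, unchanged, to a quant below that floor, such as the 19 picograms in this case. The intercept and slope are hypothetical and are not OCME's fitted values.

LOWEST_VALIDATED_PG = 25.0

def fitted_dropout(quant_pg, intercept=0.90, slope=-0.012):
    # A hypothetical straight-line fit of drop-out probability to the quant value,
    # clamped to the [0, 1] range.
    return max(0.0, min(1.0, intercept + slope * quant_pg))

def dropout_with_range_check(quant_pg):
    # Return the fitted value and flag whether it lies outside the validated range.
    value = fitted_dropout(quant_pg)
    extrapolated = quant_pg < LOWEST_VALIDATED_PG
    return value, extrapolated

print(dropout_with_range_check(30.0))   # roughly (0.54, False): inside the validated range
print(dropout_with_range_check(19.0))   # roughly (0.67, True): the fit is being extrapolated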
Dr. Krane testified that he agreed with a statement made by Scott Hodgson, who worked for OCME in 2008, "that DNA testing could be done on victim fingernail clippings, but suspect could still offer innocent explanations," because the presence of a DNA profile says nothing about how or when that DNA came to be associated with the sample that was tested (August 26, 2022 tr at 1703-1704).
Dr. Krane testified that the true number of contributors to a crime scene mixture sample could never be known (September 23, 2022 tr at 1948-1949). Dr. Krane also testified that while there have been extensive studies on stochastic effects in the context of DNA profiling, there is still quite a bit of work that needs to be done (September 23, 2022 tr at 1949). Dr. Krane agreed that an analyst should not ignore the risk of stochastic effects when estimating the number of contributors, particularly in a low copy sample. Id.
Dr. Krane testified that he mostly agreed with Dr. Buckleton's conclusion that the fingernail mixture in this case was either a two-person mixture with lots of drop-in or a three-person mixture with lots of drop-out (September 23, 2022 tr at 1957). However, in Dr. Krane's opinion, this sample clearly has lots of drop-out regardless of the number of contributors (September 23, 2022 tr at 1981).
(2) Dr. Jeanna Matthews
Dr. Jeanna Matthews has a master's degree and a PhD in computer science from the University of California, Berkeley, and a bachelor's degree in math and computer science from the Ohio State University (September 13, 2022 tr at 1729). The title of Dr. Matthews’ dissertation was "Improving File System Performance with Adaptive Methods." Id. Dr. Matthews was awarded a Science Foundation Graduate Research Fellowship and an Intel Foundation Graduate Fellowship (September 13, 2022 tr at 1730). Dr. Matthews was a fellow at the Data & Society Research Institute during her sabbatical year, which is a research institute focused on the intersection of computing and its impact on society (September 13, 2022 tr at 1730-1731). During this fellowship, Dr. Matthews met Jessica Goldwaithe and Nathaniel Adams and became aware of probabilistic genotyping software, and FST in particular, and began a multi-disciplinary collaboration with them that was awarded a Brown Institute Magic Grant (September 13, 2022 tr at 1731). The Brown Institute is a collaboration between Stanford Engineering and Columbia University's School of Journalism, and the study was on the impact of computing on investigative journalism, and in particular, investigating probabilistic genotyping software systems. Id. Dr. Matthews is currently a professor at Clarkson University in the department of computer science (September 13, 2022 tr at 1732). Dr. Matthews helped Clarkson University found a PhD program in computer science. Id. Dr. Matthews had published approximately 100 scientific articles, at least fifty of which were peer-reviewed (September 13, 2022 tr at 1737-1738). Dr. Matthews was the lead author of two articles about FST in the Association for the Advancement of Artificial Intelligence Guide and the Association for Computing Machinery (September 13, 2022 tr at 1738). Dr. Matthews has published in scientific journals and has published books (September 13, 2022 tr at 1739). She has also written about probabilistic genotyping software (September 13, 2022 tr at 1740). Dr. Matthews was a Fulbright scholar in Colombia, and visited five different universities in Colombia and spoke with them about their computer science programs (September 13, 2022 tr at 1740-1741). Dr. Matthews was one of the founding editors of AI & Ethics (September 13, 2022 tr at 1741). Dr. Matthews testified in U.S. v. Cortorreal as an expert in the field of computer science and software engineering (September 13, 2022 tr at 1742).
Dr. Matthews was deemed qualified as an expert in the area of computer science and software engineering (September 13, 2022 tr at 1748).
The Institute of Electrical and Electronics Engineers ("IEEE") is an international professional society in engineering that covers a wide range of engineering disciplines, including computer engineering, electrical engineering, software engineering and others, with approximately 400,000 members around the world (September 13, 2022 tr at 1748-1749). IEEE Standard 1012 is a standard for system, software and hardware verification and validation (September 13, 2022 tr at 1749). IEEE recommends a different level of verification and validation of systems depending on the impact or consequences the system will have in society (September 13, 2022 tr at 1752). The consequences in society are divided into four categories: catastrophic, critical, marginal and negligible. Id. IEEE recommends that if the behavior of the system causes catastrophic consequences occasionally or critical consequences probably (designated as "level four"), there should be a technically, managerially and financially independent verification and validation (September 13, 2022 tr at 1752-1753). Technical independence means that the individuals involved in the development and design of the system should not be the ones doing the verification and validation (September 13, 2022 tr at 1754). Managerial independence means that the managerial oversight of the development and the testing of the system should be separated, so that the validation is not reported to the people who have a vested interest in showing that the system is successful (September 13, 2022 tr at 1755-1756). Financial independence means money saved from verification and validation should not be diverted into the budget for developing the system (September 13, 2022 tr at 1755). Dr. Matthews’ expert opinion was that probabilistic genotyping, including FST, should be placed at integrity level four since going to prison for many years would constitute an extensive financial or social loss (September 13, 2022 tr at 1756, 1759).
Stress testing in the context of IEEE Standard 1012 is testing designed to push a system to its boundaries and document the boundaries of the system in terms of accuracy and good performance (September 13, 2022 tr at 1757-1758). Dr. Matthews’ team was able to get FST up and running, but it was a bit difficult to work with and it continued to be a bit fragile (September 13, 2022 tr at 1759). Mr. Adams was instrumental in getting FST up and running for the first time. Id. In Dr. Matthews’ opinion, Mr. Adams is an expert in the category of probabilistic genotyping software systems (September 13, 2022 tr at 1767). It was stated in the 2016 ISFG DNA Commission paper that while international industry standards applied to software validation, verification and test documentation, these standards can be simplified and extrapolated to forensic genetics (September 13, 2022 tr at 1760). However, Dr. Matthews testified that IEEE 1012 applied to software and hardware systems broadly and can be applied directly to forensic genetics software (September 13, 2022 tr at 1760-1761). It was brought out during cross-examination that the New York State Forensic Commission, SWGDAM, the International Society for Forensic Genetics and the ANAB do not require probabilistic genotyping programs to meet the IEEE Standard 1012 (September 22, 2022 tr at 1865-1866). In addition, STRmix and TrueAllele do not conform to the IEEE Standard 1012 (September 22, 2022 tr at 1866-1867).
The FDA's principles of software validation stated that, "validation activities should be conducted using the basic quality assurance precept of independence of review. Self-validation is extremely difficult. When possible, an independent evaluation is always better, especially for higher risk applications. Some firms contract out for a third-party independent verification and validation, but this solution may not always be feasible," with which Dr. Matthews agreed, with the exception that she was not sure why this solution may not always be feasible (September 13, 2022 tr at 1761, 1764-1765). Dr. Matthews testified that in connection with integrity level four software as defined in IEEE Standard 1012, there are independent third parties that provide services of this kind (September 13, 2022 tr at 1765). The FDA software validation principles also stated that, "another approach is to assign internal staff members that are not involved in a particular design or its implementation, but who have sufficient knowledge to evaluate the project and conduct the verification and validation activities." (September 13, 2022 tr at 1766). However, Dr. Matthews testified that having internal staff members may be an example of technical independence, but it would not satisfy managerial or financial independence. Id. In addition, Dr. Matthews would not recommend this for integrity level four software as defined in IEEE Standard 1012 (September 13, 2022 tr at 1766-1767).
Dr. Matthews testified that FST is an artificial intelligence system when defining AI more broadly to include automated decision-making systems that are used to make big decisions about people's lives (September 13, 2022 tr at 1768). In 2021, the IEEE USA AI Policy Committee wrote a letter to NIST regarding DNA mixture interpretation and the NIST scientific foundation review, and Dr. Matthews participated in the drafting of this letter (September 13, 2022 tr at 1769-1770). One of the points of the letter was that automated decision-making systems that impact life and liberty, such as probabilistic genotyping software, should be governed by the same rigorous standards and requirements as other critical software (September 13, 2022 tr at 1775-1776). The letter also stated that they agreed with NIST's observation that at the present time, there was not enough publicly available data to enable an external and independent assessment of the degree of reliability of DNA mixture interpretation practices, including the use of probabilistic genotyping software systems (September 13, 2022 tr at 1776).
During cross-examination, Dr. Matthews testified that FST would not fall under a narrower definition of AI as it does not use machine learning, neural nets or decision trees and does not have an aspect of sensing the environment (September 22, 2022 tr at 1835-1836). Dr. Matthews testified that she was aware that FST does include some human intervention, and that an analyst must make some sort of qualitative assessment about the data when they are using FST (September 22, 2022 tr at 1836). Dr. Matthews was also aware that after FST or probabilistic genotyping software is run, an analyst has to evaluate the data and the likelihood ratio statistic to see if the results comport with the observed data. Id. Dr. Matthews testified that probabilistic genotyping software programs like FST are designed to perform a large volume of arithmetic in order to help interpret forensic DNA mixtures. Id.
In 2019, Dr. Matthews co-authored an Artificial Intelligence Ethics and Society ("AIES") paper titled, "The Right to Confront Your Accusers: Opening the Black Box of Forensic DNA Software," with Clinton Hughes, Esq., one of the defense counsels in this case, Dan Krane, Nathaniel Adams, Jessica Goldthwaite, Esq. of the Legal Aid Society and a few PhD and undergraduate students from Clarkson University and Iona College (September 13, 2022 tr at 1777-1778). Richard Torres, Esq., one of the defense counsels on this case, was acknowledged in this article (September 22, 2022 tr at 1843). For this article, FST source code on GitHub was compared to a version produced by the team where the FST's allele cap function was removed (September 13, 2022 tr at 1780-1781). The results showed that FST with the allele cap function skewed towards inaccuracy, which means that when you are looking at a true contributor to a mixture, it will err on the side of excluding them, and when you are looking at a non-contributor to a sample, it will incorrectly include them. Id. This would be a small example of a stress test, but not a stress test as defined in IEEE Standard 1012 (September 13, 2022 tr at 1781, 1789). The article also compared the likelihood ratio results of 28,000 non-contributor samples between FST with the allele cap function and FST without the function, and on each version, there were 23 false positives, but they were not the same 23 samples (September 13, 2022 tr at 1782). During cross-examination, Dr. Matthews testified that she was aware that Dr. Buckleton and Dr. Curran did a similar study and concluded that the allele cap did not favor either the defense or the prosecution (September 22, 2022 tr at 1853). In terms of overall statistics and with only using the OCME validation study, Dr. Matthews agreed with that conclusion (September 22, 2022 tr at 1853-1854). It was also brought out during cross-examination that of the 28,000 non-contributor comparisons done by Dr. Matthews’ team, there was only one instance of a false inclusion in the strong support category and one instance in the very strong support category where the allele cap was on (September 22, 2022 tr at 1856-1859).
Sometime in 2018 or 2019, the team on this research received funding from the Brown Institute in order to study various probabilistic genotyping software (September 22, 2022 tr at 1838-1839, 1846). One of the individuals that applied for this grant was Surya Mattu, an investigative journalist working at ProPublica, a publication that had written a critical article on OCME (September 22, 2022 tr at 1840-1841). Three of the six people on this team overlapped with the experts working with the defense in this case (September 22, 2022 tr at 1843).
In 2020, Dr. Matthews co-authored an article titled, "When Trusted Black Boxes Don't Agree: Incentivizing Iterative Improvement and Accountability in Critical Software Systems," with students from Clarkson University and Iona College and Jessica Goldthwaite, which was a peer-reviewed article from the AI Ethics and Society conference (September 13, 2022 tr at 1784-1785). In addition, while not listed as co-authors, Dan Krane, Nathaniel Adams and Clinton Hughes were consulted (September 13, 2022 tr at 1787). In this paper, four probabilistic genotyping software systems, FST with the allele cap function, FST without the allele cap function, LRmix and EuroForMix, were compared (September 13, 2022 tr at 1787-1788). The study showed that these systems do have statistically significant differences (September 13, 2022 tr at 1788). In addition, OCME's verbal categories for likelihood ratio results, which are limited support, moderate support, strong support and very strong support, were found to be at times more strongly worded than the SWGDAM standard. Id. For example, a result that would be labeled moderate for SWGDAM would be labeled strong for OCME (September 13, 2022 tr at 1788-1789).
Dr. Matthews testified that it is important that the people doing verification and validation of a software are incentivized or rewarded when they find problems (September 13, 2022 tr at 1791). Dr. Matthews testified that when errors are found, there should be incentives to fix them and not just avoid triggering them, and she believed this was what had happened with the FST allele cap (September 13, 2022 tr at 1791-1792). In addition, in Dr. Matthews’ opinion, FST was not an independently validated and verified software (September 13, 2022 tr at 1792). Dr. Matthews testified that as an integrity level four software, she would like FST to be held to a high standard of verification and validation (September 13, 2022 tr at 1792-1793).
It was brought out during cross-examination that Dr. Matthews was retained by Brooklyn Defender Services in May of 2022 (September 22, 2022 tr at 1797). Dr. Matthews had been paid $1,800 to work on this case and Dr. Matthews’ daughter, Abby Matthews, was paid $25.00 an hour for her work on this case (September 22, 2022 tr at 1799-1801). Dr. Matthews started working with Mr. Hughes in the fall of 2021 on this case (September 22, 2022 tr at 1801). The four witnesses called by the defendant testified at a Daubert hearing in the Southern District in July of 2022 regarding FST (September 22, 2022 tr at 1802-1803). It was also brought out during cross-examination that from the fall of 2021, Dr. Matthews had collaborated with Mr. Hughes on the design of an FST experiment in connection with the preparation of this case (September 22, 2022 tr at 1808-1809). Mr. Hughes had also provided Dr. Matthews with some raw data for this FST experiment (September 22, 2022 tr at 1814). Dr. Matthews testified that Mr. Hughes was an active participant in this FST research experiment (September 22, 2022 tr at 1822). Dr. Matthews also testified that Mr. Hughes wrote a draft of her expert witness statement that was entered into evidence for this hearing, which she edited (September 22, 2022 tr at 1822-1823). Dr. Matthews has never spoken to Dr. Mitchell, the statistician who helped build FST, the scientists at OCME who validated FST, the forensic biologists at OCME who used FST in actual casework or anyone from the district attorney's office (September 22, 2022 tr at 1828-1829).
(3) Dr. Angela van Daal
Dr. Angela van Daal has a Bachelor of Science from the University of Adelaide and a PhD from Macquarie University in Sydney, Australia (November 3, 2022 tr at 1992). Her PhD thesis was titled, "DNA Methylation in Marsupial X chromosome Inactivation," where she was looking at the difference between the gene-active conformation of DNA and the inactive conformation of DNA (November 3, 2022 tr at 1993). Dr. van Daal's PhD was in molecular genetics (November 3, 2022 tr at 1993-1994). Dr. van Daal worked as a post-doctoral fellow for three years at Washington University in St. Louis in the area of molecular genetics and molecular biology, specifically, looking at gene transcription and activity (November 3, 2022 tr at 1992-1993). The post-doctoral work involved electrophoresis and she performed electrophoresis daily at times (November 3, 2022 tr at 1993). Dr. van Daal then worked at the University of Adelaide for about 18 months before working at the South Australian Forensic Science Center. Id. She started working at the South Australian Forensic Science Center in 1991 with the goal of implementing DNA typing to the courts of South Australia (November 3, 2022 tr at 1994). In 1991, Dr. van Daal implemented at the crime lab a PCR-based method using a PCR test called HLA-DQ Alpha for criminal casework (November 3, 2022 tr at 1994-1995). Dr. van Daal then developed other PCR markers, which were called D1S80, Apo-B and D17SZ (November 3, 2022 tr at 1995). The police were the primary customers for the DNA lab in South Australia. Id.
The American Society of Crime Lab Directors Laboratory Accreditation Board ("ASCLD/LAB") is the American laboratory accreditation body, and Dr. van Daal was an accredited inspector for ASCLD/LAB (November 3, 2022 tr at 1995-1996). Dr. van Daal, as part of a team, inspected laboratories in Australia, New Zealand and the U.S. (November 3, 2022 tr at 1996). In the U.S., Dr. van Daal inspected the Orange County, California, Crime Lab System, and was on the audit team for the FBI Laboratory. Id. Dr. van Daal is currently involved in reviewing a number of cases at the Department of Forensic Science Laboratory in D.C. as the lab lost its accreditation last year (November 3, 2022 tr at 1997).
National Association of Testing Authorities ("NATA") is the Australian equivalent of ASCLD/LAB. Id. Dr. van Daal worked with NATA to develop the first set of standards for forensic laboratories, particularly in connection to DNA (November 3, 2022 tr at 1997-1998). Dr. van Daal was a member of the Singapore Advisory Board, which the Singapore Forensic Police Lab consults on any issues or concerns with its casework (November 3, 2022 tr at 1998). Dr. van Daal was involved in developmental validations of the systems that were used in the South Australian Forensic Lab and has been involved in the review of many validation studies since then (November 3, 2022 tr at 2000).
Around 2007, Dr. van Daal visited OCME's Department of Forensic Biology shortly after the new labs opened to give a talk about her research at the time (November 3, 2022 tr at 2000-2001). At the time, Dr. van Daal was given a tour of the laboratory facilities (November 3, 2022 tr at 2001). Dr. van Daal is familiar with OCME's low copy number testing methodology and reviewed the validation studies when she was hired by the Legal Aid Society in the Collins/Peaks Frye hearing. Id. Dr. van Daal testified at that Frye hearing as well as at a Daubert hearing in New York (November 3, 2022 tr at 2002). Dr. van Daal has testified in court hundreds of times (November 3, 2022 tr at 2005). When Dr. van Daal first went into forensic science working on behalf of the police, she testified mostly for the prosecution. Id. However, since she left the South Australian Forensic Science Lab and went into academia, she has been working with defense lawyers. Id. In April of 2022, Dr. van Daal did testify on behalf of the US Attorney's Office (November 3, 2022 tr at 2006).
In 2021, Dr. van Daal was hired by Dr. Budowle as a Research Scientist 4 at the Center for Human Identification in the University of North Texas (November 4, 2022 tr at 2078-2079). The center handled both missing person cases and criminal cases (November 4, 2022 tr at 2080).
During her career, Dr. van Daal has been awarded grants including a grant from the National Institute of Justice ("NIJ") and from the Technical Scientific Working Group, which was a subset of DOJ (November 3, 2022 tr at 2003). Dr. van Daal has also been a reviewer for grant bodies. Id.
Dr. van Daal was qualified as an expert in the areas of molecular biology, PCR DNA testing and forensic DNA analysis (November 3, 2022 tr at 2004).
Dr. van Daal stated that the main goal of validation is to establish the limits within which results are robust and reliable (November 3, 2022 tr at 2017-2018). However, Dr. van Daal testified that OCME has done none of that for LCN testing (November 3, 2022 tr at 2018). Dr. van Daal testified that in the low copy validation, OCME did not perform any studies involving mixtures of more than two contributors and did not perform any mixture studies of two contributors lower than 25 picograms (November 3, 2022 tr at 2037). For the OCME validation studies, two kinds of samples were used: pristine buccal swabs and swabs from touch samples such as computer keyboards or ID badges (November 3, 2022 tr at 2038). Of the roughly 140 touch samples, only seven were actually mixtures and of those seven, none of them were less than a hundred picograms. Id. Therefore, in Dr. van Daal's opinion, OCME did not do a mixture study validation as required (November 3, 2022 tr at 2038-2039).
In addition, Dr. van Daal testified that, for example, with STR DNA profiling, the levels to which a stutter occurs are known to be within a certain range, normally less than 10 percent. Id. Therefore, a threshold can be put in place whereby any peak that is one repeat smaller and less than 10 percent of the peak height of the allele peak is deemed to be stutter. Id. However, Dr. van Daal testified that with LCN testing, that stutter level is gone, and the stutter peak can be substantially larger than the true allele peak (November 3, 2022 tr at 2018-2019). Furthermore, she testified that as the kits get more sensitive and can type smaller amounts of DNA, contamination becomes one of the concerns (November 3, 2022 tr at 2019-2020). Dr. van Daal testified that LCN just exacerbates these problems enormously (November 3, 2022 tr at 2020). Dr. van Daal testified that the amount of DNA tested in this case was less than 20 picograms (November 3, 2022 tr at 2021). If it is less than 20 picograms in the PCR reaction, that means the quantitation value would have been less than five picograms per microliter (November 3, 2022 tr at 2021-2022).
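For illustration only, the fixed stutter threshold Dr. van Daal described for conventional STR profiling can be expressed as a short filter. The sketch below is a minimal Python example using hypothetical data structures and function names; it is not any laboratory's software, and it assumes integer repeat numbers and a simple 10 percent rule. Her point is that in LCN testing such a fixed threshold breaks down, because a stutter peak can be larger than the true allele peak.

```python
# Minimal sketch, not OCME's or any vendor's code: a simplified stutter filter
# of the kind described for conventional STR profiling. The 10 percent figure
# comes from the testimony; everything else is hypothetical.

def filter_stutter(peaks, stutter_threshold=0.10):
    """peaks: dict mapping allele (integer repeat number) -> peak height (RFU).
    A peak one repeat below a larger peak and under the threshold relative to
    that larger peak's height is treated as stutter and removed."""
    called = {}
    for allele, height in peaks.items():
        parent_height = peaks.get(allele + 1)  # the peak one repeat larger
        is_stutter = parent_height is not None and height < stutter_threshold * parent_height
        if not is_stutter:
            called[allele] = height
    return called

# The 7-repeat peak (40 RFU) is under 10% of the 8-repeat peak (900 RFU) and is
# filtered as stutter; the 11-repeat peak (300 RFU) exceeds 10% of the 12-repeat
# peak (2,500 RFU) and is retained.
print(filter_stutter({7: 40, 8: 900, 11: 300, 12: 2500}))
```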
Dr. van Daal was consulted by the chair of the Texas Forensic Science Commission to review the Austin Police Department ("APD") protocols and methods as well as their validations surrounding their mixture DNA interpretation (November 3, 2022 tr at 2023). The APD DNA lab used the estimated quantity of input DNA into the amplification reaction as the primary method for determining potential stochastic effects, such as allele drop-out, and did not account for allele stacking, sharing, stutter contribution, etc. (November 3, 2022 tr at 2027-2028). Dr. van Daal testified that FST similarly used a quantitation threshold for determining allele drop-out, and that this is not scientifically sound (November 3, 2022 tr at 2028). In addition, as to both the APD DNA lab and OCME, Dr. van Daal testified that the quantity of DNA is not an appropriate metric to assess potential stochastic effects that occur during amplification for DNA mixture evidence (November 3, 2022 tr at 2029-2031). Mixture analysis is notoriously difficult, and probabilistic genotyping has evolved over time because the community has such trouble interpreting mixtures (November 3, 2022 tr at 2032-2033). Due to various factors, determining the number of contributors to a mixture is not as simple as it seems, and Dr. van Daal testified that because of allele sharing, what appear as two or three-person mixtures are often, in fact, three or four-person mixtures (November 3, 2022 tr at 2033).
Around 2010, Dr. van Daal gave a presentation at a conference commonly known as ISHI or the Promega Conference on the limitations of LCN and why she felt that it was not accepted by the forensic scientific community (November 3, 2022 tr at 2019-2040). Dr. Butler and Dr. Budowle were also among the panel members (November 3, 2022 tr at 2040). OCME was the only criminal laboratory in the U.S. that divided a very small amount of DNA into three even smaller amounts of DNA (November 3, 2022 tr at 2043). One of Dr. van Daal's PhD students did research on this question and found that the results were better when the sample was not divided. Id. Dr. van Daal testified that dividing the sample decreases the chance of having an allele drop-in be called as a true allele, but there is no doubt that the number of alleles and the quality of the profile were substantially worse (November 3, 2022 tr at 2044).
In 2009, Dr. van Daal co-authored an article in the Croatian Medical Journal titled, "Validity of Low Copy Number Typing and Applications to Forensic Science" with Dr. Budowle and Dr. Eisenberg (November 4, 2022 tr at 2049). The issue of contamination was a concern for forensic science labs from the inception of PCR DNA typing. Id. However, for low copy mixture samples of less than 20 picograms, which means there is less than the equivalent of one cell from one of the contributors, contamination is an issue for all sorts of reasons (November 4, 2022 tr at 2051). Dr. van Daal testified that OCME did mixture studies as part of their validations as required and the instances of contamination were significant (November 4, 2022 tr at 2053). Due to the sensitivity of touch samples, an LCN profile may not be relevant to a case. Id. In the article, Dr. van Daal wrote that, "proper evidence collection and handling protocols have not been well-established or at least communicated." (November 4, 2022 tr at 2054). Dr. van Daal stated that improving crime scene collection methodology and educating crime scene investigation personnel are very important (November 4, 2022 tr at 2055).
As part of the OCME validation, they looked at items of two, three or four-person mixtures, where they had either cleaned or not cleaned the item prior to having two, three or four persons handling them, and then they quantified the amount of DNA on those items (November 4, 2022 tr at 2056). Then, based on that quantification, the sample was either amplified under their LCN method or the normal STR method. Id. There were 29 instances where they saw alleles that should not have been seen based on what was known about the people who touched the items (November 4, 2022 tr at 2058). These instances included cases where the alleles were seen in two of the three PCR amplifications, which is more problematic because OCME would call these as alleles (November 4, 2022 tr at 2057, 2065). Dr. van Daal testified that this shows the issue of contamination with these kinds of samples (November 4, 2022 tr at 2058).
Dr. van Daal testified that stochastic effects have been well-studied within the forensic science community and were a well-understood aspect of DNA testing (November 4, 2022 tr at 2112-2113). To a certain degree, Dr. van Daal agreed that stochastic effects, such as stutter and contamination, exist in all types of DNA testing (November 4, 2022 tr at 2113). Dr. van Daal also agreed that OCME has incorporated protocols, thereby acknowledging that there is a higher rate of stochastic effects when it comes to LCN testing (November 4, 2022 tr at 2114).
(4) Nathaniel Adams
Mr. Nathaniel Adams works at Forensic Bioinformatic Services as a systems engineer (November 17, 2022 tr at 2158-2159). He assists with forensic biology casework reviews involving re-analysis of electronic data files, examinations of bench notes for case files, evaluations of laboratory operating procedures as well as assistance with the development, maintenance or review of various programs used in the field (November 17, 2022 tr at 2159). Mr. Adams has worked on hundreds of cases, and he works predominantly on criminal cases (November 17, 2022 tr at 2160). Most of his customers are defense attorneys. Id. Mr. Adams has a Bachelor of Science in computer science and has taken a variety of courses in computing and bioinformatics. Id. Mr. Adams has taken statistics for engineers, as well as a number of courses involving analytics, such as data mining and machine learning, which are predominantly information theory and statistics based. Id. Mr. Adams has worked for Forensic Bioinformatics for 10 years (November 17, 2022 tr at 2161). Mr. Adams has completed course work but not the thesis requirement for a master's degree in computer science. Id.
Mr. Adams created and modified a number of programs that assist in the review of forensic STR genotyping results, like those used in forensic DNA laboratories and criminal investigations (November 17, 2022 tr at 2163). He has also created several programs for the purpose of simulating genotypes or evaluating genotypes in a research context, which he has presented at several scientific conferences. Id. Mr. Adams has experience in software maintenance, such as fixing bugs, developing additional features, and optimizing a program to make it run faster or cleaner (November 17, 2022 tr at 2164-2165). Mr. Adams was able to run FST (November 17, 2022 tr at 2167).
Mr. Adams was deemed an expert in the area of software engineering, but not in the field of forensic DNA software (November 17, 2022 tr at 2219). Software development starts with the requirement stage, which in the forensic DNA context would involve biologists and statisticians describing what they are intending the program to do, and that would serve as a foundation or the basis for the technical software procedures (November 17, 2022 tr at 2224-2225). The specification stage is an intermediate stage where the requirements are more formally stated in a manner that can serve as a testable criterion, where they can demonstrate conclusively that a system does or does not exhibit an intended behavior (November 17, 2022 tr at 2225). The design stage is moving from the idea of specifications or specific behaviors that the program must exhibit into more of an architecture or a high-level description of how the program is intended to be constructed. Id. The implementation stage is the classic construction of a software program where the bulk of the programming would take place (November 17, 2022 tr at 2225-2226). The testing stage is compiling the software execution tests to show that the actual operating program adheres to those specifications previously defined (November 17, 2022 tr at 2226). The maintenance stage is the general upkeep of the software program over the course of its life. Id.
Mr. Adams testified that he has reviewed several articles published on FST, the FST validation study, the FST program itself and the OCME operating manuals (November 17, 2022 tr at 2227-2228). Mr. Adams stated that there are certain documents one would expect to see with software development that he did not get access to for FST, such as centralized requirement specifications (November 17, 2022 tr at 2229-2230). In addition, centralized test plans were incomplete or did not exist for some components, and verification activities were under-documented or did not exist (November 17, 2022 tr at 2230). Furthermore, maintaining the concept of traceability so that a requirement as defined as an intended behavior of FST can be traced through translation to specifications was not addressed. Id. Requirement documents are central documents describing the intended behaviors of the developers that the program is supposed to exhibit. Id. For FST, Mr. Adams has seen flow charts but has not seen an essential and exhaustive list of necessary behaviors (November 17, 2022 tr at 2231). Specification documents are a collection of specific testable behaviors, which are intended to be a description of where software limitations exist (November 17, 2022 tr at 2231-2232). For FST, the specification documents do not exist in a manner Mr. Adams would expect as a software engineer (November 17, 2022 tr at 2232). In the context of software engineering, verification is the demonstration that the system built is in fact the system you had intended to build. Id. IEEE Standard 1012 is the standard for verification and validation of systems software and hardware (November 17, 2022 tr at 2233). To Mr. Adams’ knowledge, there has not been any verification of FST pursuant to a standard like 1012 (November 17, 2022 tr at 2234). Mr. Adams testified that the verification documented in FST validation studies is limited and the fact that not every locus was ever tested is concerning (November 17, 2022 tr at 2236-2237). Mr. Adams testified during cross-examination that lack of a specification document is not unique to FST and that it is an endemic problem in the field (November 18, 2022 tr at 2321).
The Software Engineering Body of Knowledge ("SWEBOK") is published by IEEE to describe a foundational knowledge of computing software engineering concepts and is regarded in the software engineering field as authoritative (November 17, 2022 tr at 2237-2238). Mr. Adams testified that for FST, the expected outcomes of the many test efforts were not stated as required by SWEBOK, and therefore, it was practically impossible to determine exactly when FST is operating as expected (November 17, 2022 tr at 2239). In addition, Mr. Adams testified that ambiguity or gaps in the description of intended behaviors of FST have led to confusion and made it difficult to arrive at a conclusion whether the system did or did not objectively behave as described (November 17, 2022 tr at 2240-2241). If there is ambiguity in the requirements, the verification process can be subjective, and two people looking at a single system behavior could disagree on whether it is acceptable or not (November 17, 2022 tr at 2241). Mr. Adams testified that there are certain behaviors of FST that have not been described in the validation study and certain behavior modifications made to the system that were not documented, which are problematic (November 17, 2022 tr at 2242). For STRmix, the software requirements specification document is listed on their website (November 17, 2022 tr at 2243).
On cross-examination, Mr. Adams testified that there is no oversight body for the forensic DNA community that mandates that probabilistic genotyping software must comply with the IEEE standards (November 18, 2022 tr at 2310-2311). However, Mr. Adams also testified that there are not many regulatory bodies for forensic science in general that could make such a requirement (November 18, 2022 tr at 2311). It was also pointed out during cross-examination that the IEEE 1012 Standard states that, "use of IEEE standard is wholly voluntary" (November 18, 2022 tr at 2313-2314). It was Mr. Adams’ understanding that the New York State Forensic Science Commission does not require probabilistic genotyping programs to meet the IEEE Standard 1012 (November 18, 2022 tr at 2318).
Based on disclosures made by OCME in connection to a previous federal case around 2016 or 2017, Mr. Adams learned that there were changes made to the FST program that had not been previously disclosed (November 17, 2022 tr at 2246-2247). Mr. Adams testified that several of the modifications made affected the ultimate output of the likelihood ratio, which would be considered a major modification (November 17, 2022 tr at 2247). If the system's core feature was functionally affected by the change, there should have been a re-evaluation of the whole system (November 17, 2022 tr at 2248). However, in Mr. Adams’ opinion, for FST, this process lacked in substance and clarity. Id. Version control is tracking the changes to the source code itself, such as comments as to why they were made, who made them and how they were made (November 17, 2022 tr at 2248-2249). Mr. Adams has not seen any version control logs for FST (November 17, 2022 tr at 2250). If the original version of FST that was validated and evaluated does not exist, one would not be able to review the system as it existed in a case that used an earlier version of FST (November 17, 2022 tr at 2251). It was brought out during cross-examination that non-disclosure of source code is not unique to FST, and other developers of probabilistic genotyping programs have been hesitant to release the source code as well (November 18, 2022 tr at 2321-2322).
Mr. Adams testified that the function of FST that has been referred to as the locus-dropping function or the allele-cap function has not been described in any published literature or in the validation study (November 17, 2022 tr at 2252-2253). Mr. Adams was aware that this function was introduced after the validation (November 17, 2022 tr at 2253). Mr. Adams testified that this function was introduced to avoid mathematically impossible likelihood ratios from occurring and stated that it was concerning that this was not identified as a risk before the validation study (November 17, 2022 tr at 2254). FST calculates a likelihood ratio at each locus for each sub-population, and then it will combine those likelihood ratios across all those sub-populations (November 17, 2022 tr at 2254-2255). If the allele-cap function is invoked for one and only one population, the function is triggered and applied to the likelihood ratios for all sub-populations without taking into consideration whether it increases or decreases the final likelihood ratio, and there is no indication to a user of FST that this had occurred (November 17, 2022 tr at 2254-2256). Mr. Adams testified that as a software engineer, the fact that the system was modified in a manner where its calculations could be affected after it was validated is significant and concerning (November 17, 2022 tr at 2259). Mr. Adams testified that as there were no change logs or documentation of the programmers or the system, he cannot perform a review to evaluate the changes (November 17, 2022 tr at 2260). Other than OCME emails, Mr. Adams has not seen any explanation regarding the 0.97 threshold in published articles or the validation study documents (November 18, 2022 tr at 2272). To Mr. Adams’ knowledge, FST has been evaluated to a limited extent only by OCME after the 0.97 allele frequency change (November 18, 2022 tr at 2272-2273).
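For illustration only, the behavior Mr. Adams described, namely that triggering the allele-cap (locus-dropping) condition for a single sub-population causes the locus to be dropped for every sub-population, might be sketched in Python as follows. The data structures, the treatment of a dropped locus as contributing nothing to the product, and the trigger test are all simplifying assumptions; this is not FST's actual code.

```python
# Minimal sketch, not FST's source code: it only illustrates the reported
# behavior that if the allele-cap condition fires at a locus for any one
# sub-population, the locus is removed from the calculation for all of them.

def combine_lrs(per_locus_lrs, cap_triggered):
    """per_locus_lrs: dict locus -> {subpop: likelihood ratio at that locus}
    cap_triggered: dict locus -> {subpop: True if the cap condition fires}
    Returns a combined LR per sub-population (product over retained loci)."""
    subpops = next(iter(per_locus_lrs.values())).keys()
    combined = {pop: 1.0 for pop in subpops}
    for locus, lrs in per_locus_lrs.items():
        if any(cap_triggered[locus].values()):
            # the locus is dropped for every sub-population, with no notice to the user
            continue
        for pop, lr in lrs.items():
            combined[pop] *= lr
    return combined

# Hypothetical two-locus example: the cap fires at "D7S820" for one sub-population,
# so that locus is excluded from every sub-population's combined figure.
print(combine_lrs(
    {"D7S820": {"popA": 8.0, "popB": 6.0}, "TH01": {"popA": 3.0, "popB": 2.5}},
    {"D7S820": {"popA": False, "popB": True}, "TH01": {"popA": False, "popB": False}},
))
```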
FST has programmed into it specified drop-out rates for specific template quantities (November 18, 2022 tr at 2281). Therefore, at a specific locus, based on the number of contributors, whether the mixture is deducible or not and how many picograms the sample is, there is a set drop-out rate (November 18, 2022 tr at 2280-2282). If the sample quantity in picograms falls between two of the values specified in FST, an interpolation function is performed (November 18, 2022 tr at 2282). For example, at Locus D7S820, for a two-contributor, non-deducible sample, the drop-out rates at 6.25 picograms, 12.5 picograms and 25 picograms are all 0.41, which is referred to as a floor in computer science (November 18, 2022 tr at 2283-2284). Apart from the FST source code, Mr. Adams has not seen this floor discussed, including in the FST validation documents (November 18, 2022 tr at 2284). In addition, when running FST, the user is not informed of this floor. Id. Unit testing is an approach to software testing that attempts to isolate very specific discrete behaviors of the software and rigorously test each of those functional units (November 18, 2022 tr at 2290). It is typically a precursor to more complex testing in a hierarchical approach. Id. Mr. Adams has not seen automated unit testing in the FST code, which would commonly be developed in parallel to the actual functional code (November 18, 2022 tr at 2290-2291). Mr. Adams did see testing of likelihood ratio calculations performed on a locus-by-locus basis, but they were in isolation with a few dozen total test cases, as well as testing of the drop-out rate interpolation functions (November 18, 2022 tr at 2291). Mr. Adams testified that he did not see any descriptions by the developers to claim or justify why they selected the test cases they did, or a conclusion that those cases were sufficient. Id. In addition, certain undesirable behaviors of FST, such as the locus-dropping feature, were not identified at the time of those original tests, indicating that the tests were either insufficient or that the understanding of what the developers had intended of the system was lacking (November 18, 2022 tr at 2291-2292).
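For illustration only, the quantity-based drop-out lookup and the floor Mr. Adams described can be sketched as a small interpolation over a table. The 0.41 entries at 6.25, 12.5 and 25 picograms for D7S820 (two contributors, non-deducible) are the values cited in the testimony; the entries above 25 picograms and the assumption that the interpolation is linear are hypothetical and included only to make the sketch runnable.

```python
# Minimal sketch, not FST's source code: a lookup table of drop-out rates by
# template quantity with interpolation between entries. Identical entries at the
# low end act as the "floor" described in the testimony.

DROP_OUT_TABLE = {
    # template quantity (pg) -> drop-out rate for one locus/contributor setting
    6.25: 0.41,
    12.5: 0.41,
    25.0: 0.41,   # the three identical entries form the floor
    50.0: 0.30,   # hypothetical values above 25 pg, for illustration only
    100.0: 0.15,
}

def drop_out_rate(quantity_pg):
    points = sorted(DROP_OUT_TABLE.items())
    if quantity_pg <= points[0][0]:
        return points[0][1]
    if quantity_pg >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= quantity_pg <= x1:
            # linear interpolation between the two bracketing table entries
            return y0 + (y1 - y0) * (quantity_pg - x0) / (x1 - x0)

# Any quantity between 6.25 pg and 25 pg interpolates between identical 0.41
# entries, so the rate never falls below the floor in that range.
print(drop_out_rate(18.0))   # 0.41
```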
As a software engineer, in Mr. Adams’ opinion, the coding used in FST has some deviations from widely recognized coding practices (November 18, 2022 tr at 2292-2293). Mr. Adams’ main concern with the FST coding itself is that it lacks the principle of traceability, meaning it lacks the ability to associate a particular segment of code as it exists in the source code with an intended software requirement and specification (November 18, 2022 tr at 2293). Mr. Adams testified that he has examined multiple probabilistic genotyping systems and he has not seen quantitation-based drop-out rates or a locus-dropping function other than with FST (November 18, 2022 tr at 2293-2294).
Conclusions of Law
The long-recognized " Frye test applied to the admissibility of novel scientific evidence is whether the accepted techniques, when properly performed, generate results accepted as reliable within the scientific community generally." People v. Wakefield , 38 N.Y.3d 367, 380, 174 N.Y.S.3d 312, 195 N.E.3d 19 (2022), quoting People v. Wesley , 83 N.Y.2d 417, 422, 611 N.Y.S.2d 97, 633 N.E.2d 451 (1994) (internal quotation marks omitted). "General acceptance by the relevant science community, however, does not require that the procedure be unanimously indorsed." Id ., quoting People v. Middleton , 54 N.Y.2d 42, 49, 444 N.Y.S.2d 581, 429 N.E.2d 100 (1981) (internal quotation marks omitted). However, merely showing that the "expert's opinion has some support is not sufficient to establish general acceptance in the relevant scientific community." People v. Williams , 35 N.Y.3d 24, 37, 124 N.Y.S.3d 593, 147 N.E.3d 1131 (2020) (internal quotation marks omitted). The party that wishes to introduce the disputed evidence "must show consensus in the scientific community as to [the methodology's] reliability." Id. , quoting Sean R. v. BMW of N. Am., LLC , 26 N.Y.3d 801, 809, 28 N.Y.S.3d 656, 48 N.E.3d 937 (2016).
In 2020, the Court of Appeals held that admitting evidence of LCN testing and FST without a Frye hearing was an abuse of discretion as a matter of law. Williams , 35 N.Y.3d at 38, 40, 124 N.Y.S.3d 593, 147 N.E.3d 1131. Regarding LCN evidence, in 2010, the trial court in People v. Megnath , 27 Misc.3d 405, 898 N.Y.S.2d 408 (Sup. Ct., Queens Co., 2010), following a Frye hearing, held that the People had met their burden of establishing that LCN DNA testing as conducted by OCME was generally accepted as reliable in the forensic scientific community under the Frye standard, and that it was not a novel scientific procedure within the scope of the Frye doctrine. See People v. Megnath , 27 Misc.3d 405, 898 N.Y.S.2d 408 (2010). However, the Court of Appeals found fault with the conclusion in Megnath . It held that Megnath ’s analysis of LCN testing was based solely on the court's review of "what was OCME's own, internal support for its process, as well as upon evidence reflecting that such methodology had been used worldwide for over 10 years and was currently used in many other countries." Id. at 38, 124 N.Y.S.3d 593, 147 N.E.3d 1131 (internal quotation marks omitted). As such, the Court of Appeals found that the analysis underlying the Megnath ruling "did not adequately assess whether OCME's LCN testing was generally accepted within the relevant scientific community." Id. at 39, 124 N.Y.S.3d 593, 147 N.E.3d 1131. Therefore, the approximately 10 trial court decisions that admitted LCN evidence relying on the Megnath ruling also did not adequately assess whether OCME's LCN testing was generally accepted within the relevant scientific community. Id. at 38-39, 124 N.Y.S.3d 593, 147 N.E.3d 1131. However, the Court of Appeals stopped short of ruling that LCN testing does not meet the Frye standard.
Similarly, as to FST evidence, the Court of Appeals found that the court in People v. Rodriguez (Sup. Ct., NY Co., 2013) (not officially reported), after conducting a Frye hearing, based its approval of FST solely on "internal validation by OCME and approval of the tool by the DNA Subcommittee of the New York State Commission on Forensic Science," and that did not sufficiently meet the Frye standard. Id. at 41, 124 N.Y.S.3d 593, 147 N.E.3d 1131. In addition, the Court of Appeals found that the court in People v. Garcia , 39 Misc.3d 482, 963 N.Y.S.2d 517 (Sup. Ct., Bronx Co., 2013) admitted FST without a Frye hearing "based on what essentially was the ‘aggregation’ theory advanced by the People," and that also did not adequately evaluate FST under the Frye standard. Id. at 41-42, 124 N.Y.S.3d 593, 147 N.E.3d 1131. But, even as to FST, the Court of Appeals again stopped short of ruling that FST does not meet the standard set under Frye . The Court of Appeals has not stated that LCN testing and FST cannot meet the Frye standard. However, it also held that the trial court cannot simply rely on OCME's own validation and tests alone.
Therefore, pursuant to the Court of Appeals’ holding in Williams , this court conducted a Frye hearing on both LCN testing and FST. As far as this court is aware, since the Williams decision, there has not been any New York State Appellate Court decision on this issue. In addition, there has not been any trial court decision on the issue of admissibility of LCN testing or FST evidence.
This court did review People v. Collins , 49 Misc.3d 595, 15 N.Y.S.3d 564 (Sup. Ct., Kings Co., 2015), where, after a Frye hearing, the court held that the evidence derived both from high sensitivity analysis and from the FST was not yet proven to be admissible under the Frye test. See People v. Collins , 49 Misc.3d 595, 15 N.Y.S.3d 564 (2015). In addition, this court reviewed the New Jersey Appellate Division decision in State v. Rochat , which, while not controlling authority, was decided after Williams ; there, the court reversed the trial court's admission of FST and LCN evidence, which had been admitted following a Frye hearing, and found that the State had failed to establish that LCN or FST met the Frye standard. See State v. Rochat , 470 N.J.Super. 392, 269 A.3d 1177 (2022). In Rochat , the court stated that because this was a criminal matter, the State's burden is to "clearly establish" that the challenged technique is "widely, but perhaps not unanimously, accepted as reliable" by the relevant scientific community. Rochat , 470 N.J.Super. at 441-442, 269 A.3d 1177. However, this New Jersey ruling seems to require more than the traditional Frye standard by stating that the proponent is required to "clearly establish" that the technique is "widely" accepted as reliable.
This court has also reviewed a number of federal cases where the admissibility of evidence of LCN testing or FST was considered under the Daubert standard ( Daubert v. Merrell Dow Pharmaceuticals, Inc. , 509 U.S. 579, 113 S.Ct. 2786, 125 L.Ed.2d 469 [1993] ). See U.S. v. Wilbern , 2022 WL 10225144 (U.S. Ct. of Appeals, Second Circuit, 2022) (finding that the district court's conclusion that LCN DNA testing as performed by OCME was generally accepted in the relevant scientific community was not erroneous); United States v. Jones , 2018 WL 2684101 (U.S. District Ct., S.D.N.Y., 2018) (finding that OCME's FST is admissible under the Daubert standard); U.S. v. Morgan , 53 F.Supp.3d 732 (2014) (finding that OCME's LCN DNA test results are admissible under the Daubert standard). In Daubert , the United States Supreme Court held that, at least in Federal courts, the Frye test had been superseded by the adoption of the Federal Rules of Evidence, particularly rule 702, which allows the court to permit testimony concerning scientific or technical evidence if such evidence will aid the fact finder in understanding the evidence or determining a fact at issue ( Fed. Rules Evid., rule 702 ). See Daubert v. Merrell Dow Pharmaceuticals, Inc. , 509 U.S. 579, 113 S.Ct. 2786, 125 L.Ed.2d 469. As such, the United States Supreme Court held that the "general acceptance" standard should not apply in federal cases. Id. at 589, 113 S.Ct. 2786. While this court is aware that the standards under Frye and Daubert are different, it found the cases to be relevant to the extent that they show how courts have treated LCN and FST evidence.
The following constitutes this court's conclusions of law following the Frye hearing.
Low Copy Number Testing
OCME developed LCN DNA testing in order to obtain DNA profiles even when there is only a small amount of DNA. LCN comes into play when the sample is below 100 picograms. The basic steps of standard DNA testing are: (1) DNA extraction; (2) quantitation; (3) 28-cycle amplification and (4) analysis by running the DNA through capillary electrophoresis to produce the DNA profile. For LCN testing, OCME modified this testing procedure to increase the sensitivity, and modified the interpretation protocol to account for that increase in sensitivity. Specifically, the amplification cycles were increased to 31 cycles to essentially make more copies of the DNA segments to be analyzed. In addition to the three extra cycles, OCME did the amplification three times ("triplicate amps"), and the protocols require that an allele be seen in at least two of the three amplifications to be assigned to the profile.
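For illustration only, the "two out of three" consensus rule described above can be sketched in a few lines of Python. The data structures are hypothetical and the sketch ignores everything else in OCME's interpretation protocol; it shows only how requiring an allele to appear in at least two of the three amplifications screens out an allele seen in a single replicate.

```python
# Minimal sketch, not OCME's software: the triplicate-amplification consensus
# rule in which an allele must appear in at least two of three replicates.

from collections import Counter

def consensus_alleles(replicate_calls):
    """replicate_calls: list of three sets, each containing the alleles detected
    at one locus in one of the three amplifications."""
    counts = Counter(allele for rep in replicate_calls for allele in set(rep))
    return {allele for allele, n in counts.items() if n >= 2}

# Allele 14 appears in only one replicate (consistent with drop-in) and is not
# assigned to the profile; alleles 11 and 12 appear in at least two replicates.
print(consensus_alleles([{11, 12}, {11, 12, 14}, {11}]))
```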
All the experts who testified regarding LCN testing, both for the People and the defense, agree that high sensitivity analysis increases stochastic effects, which can make it more difficult to interpret the results. This was well known even before OCME began its LCN validation process. Stochastic effects are sampling errors that take place during the amplification step, which can occur with high template samples as well but are more common with low template samples. The stochastic effects are stutter products, allelic drop-in, allelic drop-out and peak imbalance. During each PCR cycle of a DNA analysis, the amount of DNA is, in a sense, doubled. If we think of the DNA as fragments of substance, and there is enough of a fragment present when the copying process begins, the process grabs a decent amount of DNA that can be copied. However, if the sample is a low amount of DNA, and a fragment is not grabbed until the 10th cycle or the 20th cycle, by the end, there is going to be much less of that fragment or that allele than the other alleles, which is how you get peak imbalance. If the process happens to grab only one or two fragments, or does not grab any fragments in the first couple of cycles, then there is nothing to copy and at the end of the PCR process that fragment is not represented, which is called drop-out. Drop-in is when you have a small fragment of DNA, whether it is from contamination or from a very, very minor contributor, that gets into the process somewhere during the PCR process. If, during the amplification procedure, DNA polymerase, which is an enzyme that facilitates the amplification, ends up creating small byproduct alleles at a much lower percentage, this is called stutter. Dr. O'Connor testified that the three extra cycles for LCN testing are designed to minimize drop-out, and the triplicate amps are performed in an effort to avoid counting drop-in alleles.
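For illustration only, the mechanism described above can be mimicked with a toy simulation. The sketch below is not a validated model of PCR chemistry; the per-cycle copying efficiency, the starting copy numbers and the use of a simple binomial draw are assumptions made solely to show why a very low starting template produces peak imbalance and drop-out.

```python
# Toy simulation, not a validated PCR model: each cycle, every existing copy of
# an allele is duplicated with some probability. With very few starting copies,
# an allele can lag far behind its partner (peak imbalance) or, if it is never
# sampled into the reaction at all, never appear (drop-out).

import numpy as np

def amplify(start_copies, cycles=31, efficiency=0.85, seed=1):
    rng = np.random.default_rng(seed)
    copies = start_copies
    for _ in range(cycles):
        copies += rng.binomial(copies, efficiency)  # copies duplicated this cycle
    return copies

# Two alleles starting from 3 copies and 1 copy end up far apart after 31 cycles,
# while an allele that was never captured (0 copies) drops out entirely.
print(amplify(3), amplify(1), amplify(0))
```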
In order to use LCN testing in criminal casework, OCME was required to undergo validation studies. Dr. O'Connor testified that OCME created the Low Copy Number Section at the laboratory around 2002 and started using LCN testing in 2006 after approximately four years of validation studies. OCME had a group of five or six scientists assigned to the validation full-time, and the validation consisted of over 800 samples with sensitivity studies from 100 picograms to 6.25 picograms. Dr. O'Connor testified that during the validation studies, the scientists were able to get results from all of the sample sizes from 150 picograms to 6.25 picograms. Dr. O'Connor testified that when LCN was brought online at OCME, the LCN procedure was being performed in different countries, and OCME had modeled some of their procedures on the United Kingdom's Forensic Science Service's ("FSS") LCN procedure. In 2005, once the internal validation of LCN was completed, OCME presented the validation studies and the procedures to the New York State Forensic Science Commission's DNA Subcommittee for its approval. The DNA Subcommittee is responsible for reviewing and assessing all DNA methodologies that are brought before it, and it sets the standards for DNA labs in New York State. The Subcommittee is made up of seven members, and it is mandated that the members include individuals in the disciplines of molecular biology, population genetics, laboratory standards, quality assurance regulation and monitoring and forensic science. These individuals must be recognized in the scientific community based on the categories, and they are from all over the world. The Subcommittee makes binding recommendations to the Commission, which is the body responsible for setting the standards for the public laboratories in New York State.
The DNA Subcommittee held meetings regarding LCN on May 17, 2005 and September 9, 2005. In addition to submitting the validation studies, OCME answered questions from the Subcommittee members at the meetings. On October 6, 2005, the DNA Subcommittee approved OCME's validation of LCN testing. Subsequently, on December 6, 2005, the Forensic Commission discussed OCME's use of additional cycles in LCN testing. On December 15, 2005, the Commission issued a letter approving the increased cycle number for LCN DNA testing. In early 2006, OCME went online with LCN DNA testing. On August 22, 2006, OCME was before the DNA Subcommittee again to discuss proficiency testing of analysts that were trained in performing LCN testing. Following this meeting, the Subcommittee found that the responses of OCME were satisfactory regarding the concerns expressed by the Forensic Commission.
Then, in 2014, the Commission requested additional review of portions of the LCN validation, specifically, the lower limits of LCN testing, asking whether the OCME's procedures had changed since the initial validation and if there were changes, whether there were validations to support them. On June 2, 2014, the DNA Subcommittee issued a letter stating that the Subcommittee unanimously found that scientifically, there was no lower limit in the quantity of DNA that must be present before LCN testing could be employed. In order to answer the remaining questions, in August of 2014, the Subcommittee visited OCME on two separate dates. On September 5, 2014, the DNA Subcommittee held a meeting and voted that there had been no substantive changes made to the LCN DNA procedure since its approval in 2005.
In September of 2017, the Legal Aid Society and the Federal Defender Services made an official complaint to the New York State Inspector General's office against OCME claiming that OCME was engaging in negligent conduct and malfeasance with the way they were performing DNA analysis, specifically referring to the use of LCN and FST. This complaint was referred to the Forensic Science Commission, and in response, the DNA Subcommittee reviewed LCN and FST again. On December 4, 2017, the DNA Subcommittee wrote a letter to the Commission stating that there was no significant malfunction as asserted in the letter to the Inspector General. The letter stated that based on OCME's validations, LCN could be used in potentially identifying a major contributor to a DNA mixture and can be used with 31 cycles.
This court is not equating the approval of LCN testing by the Forensic Science Commission and its DNA Subcommittee with general acceptance in the relevant scientific community. However, the fact that a group of distinguished experts in various aspects of DNA analysis tasked with approving DNA methodologies and setting the standards for DNA labs in New York State approved LCN is certainly relevant and constitutes some evidence of general acceptance. See People v. Williams , 35 N.Y.3d 24, 41, 124 N.Y.S.3d 593, 147 N.E.3d 1131. As shown above, the Commission and the DNA Subcommittee do not just rubber stamp their approval on new methodologies. Natasha Harvin-Locklear, Esq., who is the special counsel to the Commission and the DNA Subcommittee, testified that she has seen the Commission or the DNA Subcommittee delay action on a technique, request more information from a lab before making a decision or limit the use of the proposed technology. In addition, it was pointed out by the People that the DNA Subcommittee members change over time, and the individual members were different between 2005 and 2017. Therefore, it was not the same seven members that approved LCN each time LCN was brought before the Subcommittee.
This court now turns to the expert testimony and the scientific writings on LCN testing introduced at the Frye hearing. In 2001, even before OCME started validating LCN, Dr. Bruce Budowle wrote an article discussing the considerations and cautions that a laboratory should use when performing LCN typing, which Dr. O'Connor testified was one of the articles OCME used as guidance. In addition, Dr. Peter Gill published many articles on forensic DNA and LCN testing, including on consensus profiles from replicate amplifications, which OCME relied on for the validation and interpretation of its LCN procedures. This literature on LCN by world-renowned scientists tends to suggest to this court that LCN was not a novel technique when OCME decided to use it, although there might have been and still are some disagreements as to the specific protocols and requirements.
In 2009, Dr. Bruce Budowle, Dr. Arthur Eisenberg and Dr. Angela van Daal co-authored an article in the Forensic Science International titled, "Low Copy Number Typing has yet to Achieve ‘General Acceptance.’ " The article stated that OCME had not implemented an interpretation protocol consistent with their validation studies, and that it was imperative that protocols were publicly available for public review. The article concluded that the approaches practiced by OCME were not ones they would endorse as they were flawed. Dr. O'Connor testified that at the time of this article, OCME's protocols were not online, but sometime in 2009, OCME did publish the LCN validation summary, which included the protocols used by OCME for casework. It was pointed out by the defense that the entire validation summary, which was 603 pages, was never published in a journal.
In addition to OCME responding to the Budowle et al. article, Dr. John Buckleton and Dr. Peter Gill wrote an article in response in the same journal in 2009, stating that the Budowle et al. article presented views that were inadequately precise, demonstrated a lack of appreciation of the underlying principles and were not aligned with broader scientific opinion. It was further stated in the article that LCN DNA testing as performed by OCME was generally accepted as reliable in the forensic scientific community. In Rochat , the New Jersey Appellate Court stated that the only published article the State's expert had referred to in support of LCN testing was an article written by OCME personnel. Rochat , 470 N.J.Super. at 449, 269 A.3d 1177. However, here, the People introduced multiple articles written by Dr. Buckleton and Dr. Gill in support of OCME's LCN testing.
In 2010, Dr. Budowle and Dr. van Daal responded to Dr. Buckleton and Dr. Gill's article, stating that the forensic science community did not know what the practices of LCN laboratories were and whether they were valid and reliable. The article also criticized LCN, emphasizing the deleterious effects of the increased stochastic variation. They again advocated for openness and urged laboratories to provide their protocols.
In 2010, Dr. Buckleton and Dr. Gill once again responded in an article stating that they strongly disagreed with Budowle et al. that stochastic effects were confined to low copy DNA, and that such effects clearly apply to every kind of profiling method. The article further stated that stochastic effects simply get bigger as sensitivity increases, and that by probabilistically modeling these effects, one can ensure that reliable inferences are drawn from the data. In hindsight, Dr. Buckleton felt that Dr. Budowle had not come to grips with the elegance of the interpretation strategy and the highly beneficial aspects it brought to the interpretation of these types of data. The article stated that both the technology and the interpretation science of low template DNA analysis were much more advanced than they were given credit for in the Budowle et al. paper. As to this point, Dr. O'Connor testified that with LCN, the stochastic effects do increase, and therefore one needs to account for them and be cautious, but he agreed with Dr. Buckleton that these effects can occur with all types of DNA testing, not just LCN.
Subsequently, Dr. Budowle, Dr. van Daal and Dr. Ranajit Chakraborty published an Authors' Response in the Journal of Forensic Sciences stating that the treatment of the probability of allele drop-out was a serious problem that had yet to be adequately developed for LCN typing protocols. It should be noted that Dr. Chakraborty was a member of the New York State DNA Subcommittee when LCN was approved, but after leaving the Subcommittee he changed his position on LCN and criticized it.
Dr. Mitchell Holland did not testify at this hearing, but a PowerPoint he had prepared in 2009 for his testimony at the Frye hearing in the case of People v. Megnath was entered into evidence. Dr. Holland had testified for the People. Dr. Holland stated in the PowerPoint that "[v]alidation studies have clearly illustrated that increasing the number of cycles from 28 to 31, along with the appropriate laboratory and interpretation method, produces results that can be reliably reported in criminal cases. Other studies have illustrated that cycle number can be increased to 34 cycles and still produce results that can be reliably reported." As to the increased number of PCR cycles, Dr. Buckleton testified that there is nothing special about 28 cycles, and the cycle number varies across PCR replication in forensic science. He further stated that there are user guides for testing kits that provide protocols for cycles that go beyond 28 cycles. Dr. Buckleton testified that the number of PCR cycles does not cause the stochastic effects, but simply allows us to see those effects that are occurring naturally.
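For illustration only, and not as part of the hearing record, the following minimal sketch shows the arithmetic behind the cycle-number debate, assuming the idealized simplification that each PCR cycle perfectly doubles the DNA present (real reactions are less efficient):

```python
# Illustrative only: idealized PCR amplification arithmetic.
# Assumes perfect doubling every cycle, which real reactions do not achieve.
def ideal_amplification_factor(cycles: int) -> int:
    """Copies produced from one template copy after the given number of PCR cycles."""
    return 2 ** cycles

standard = ideal_amplification_factor(28)
enhanced = ideal_amplification_factor(31)

# Going from 28 to 31 cycles multiplies the idealized yield by 2**3 = 8, which is
# why extra cycles increase sensitivity -- and also make naturally occurring
# stochastic artifacts (drop-in, drop-out, elevated stutter) easier to observe.
print(enhanced / standard)  # 8.0
```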
Dr. Holland also stated in his PowerPoint that it is clear from the validation studies that input amounts of DNA far less than the kit-recommended 0.5 to 1.25 nanograms can produce reliable results. In addition, Dr. Holland's PowerPoint stated that "any suggestion that only a few laboratories in the U.S. and/or the world are doing LCN STR analysis is a misrepresentation of reality." Dr. Holland also stated that most laboratories use some version of LCN STR techniques. However, the defense brought out that in 2016, in response to an email from Timothy Kupferschmid, Chief of Labs and Director of the Forensic Biology Department at OCME, asking for a statement that LCN was generally accepted in the scientific community, Dr. Holland answered, "In my experience with the OCME DNA testing laboratory, they have utilized a conservative approach when evaluating results from forensic samples with low amounts of DNA," but further stated that "I think it's fair to say that there isn't 'general acceptance' of the 'LCN' approach in the community, depending on how you frame things."
SWGDAM is a scientific working group on DNA analysis methods, organized within the FBI, made up of scientists and practitioners who examine DNA analysis methods and develop guidelines and recommendations for laboratories performing DNA testing. SWGDAM's standards and guidelines are used by most accredited laboratories in the United States. At the time OCME was conducting its LCN validation, SWGDAM did not have guidelines specific to LCN. However, in 2014, SWGDAM did publish guidelines for STR enhanced detection methods. The guidelines were not an endorsement by SWGDAM of that methodology, but they stated the best practices for labs performing enhanced detection methods for STR typing. It was Dr. O'Connor's opinion that the majority, if not all, of the guidelines recommended by SWGDAM were in line with OCME's LCN protocols that were already in effect. The guidelines recommend procedures including replicate amplification and the development of a consensus profile, which were followed by OCME. Rochat , 470 N.J.Super. at 446, 269 A.3d 1177. Dr. O'Connor stated that the fact that SWGDAM came out with these guidelines showed that it was acknowledging that this is a methodology being used within the field. The defense emphasized, and this court acknowledges, that SWGDAM did not state whether it was or was not endorsing LCN. However, regardless of SWGDAM's position, this court infers that, at least by 2014, LCN was being used broadly enough in the field that SWGDAM felt the need to come up with guidelines specific to it.
The fact that OCME no longer uses LCN testing was also addressed at the hearing. OCME had validated LCN testing using a typing kit called Identifiler. In 2016, OCME announced that going forward it would be using an amplification kit called PowerPlex Fusion ("Fusion"). Dr. O'Connor testified that CODIS had announced that starting in 2017, it would increase the number of core loci needed in order to upload to the national database from 13 to 20 locations, which Identifiler was not able to do. OCME validated Fusion to be used with 29 cycles and to test 24 locations. In addition, the minimum total DNA input validated for the Fusion kit was 37.5 picograms, which is within the range that can be used with standard testing without having to employ additional cycles or an LCN interpretation process. As the threshold to employ LCN testing was 100 picograms, Fusion covered most of the LCN range. Dr. O'Connor testified that the adoption of Fusion did not invalidate the reliability of the LCN technique OCME had been using; Fusion was simply a kit with more advanced technology. It was also Dr. Buckleton's opinion that laboratories have phased out the use of LCN methodology because it has been superseded by the increased sensitivity of modern multiplexes.
One of the issues Dr. van Daal raised regarding LCN was that OCME did not perform any validation studies for LCN with two-contributor DNA mixtures below 25 picograms. Dr. Zoran Budimlija, who was a research scientist with OCME and one of the scientists who worked on LCN's initial validation, had also stated at a civil trial that in his opinion, the LCN validation did not establish that LCN was reliable for DNA mixtures below 25 picograms. However, Dr. O'Connor testified that OCME's LCN validation consisted of over 800 samples with sensitivity studies from 100 picograms to 6.25 picograms. A sensitivity study looks at the different DNA sample amounts that can be used in the testing and then extrapolates from that information to determine the interpretation procedure. Dr. O'Connor testified that during the validation study, the scientists were able to get results from all of the sample sizes from 150 picograms to 6.25 picograms. Extrapolation means looking beyond one's observations and making inferences outside the range of values in the data, because it would be impracticable to test every data point from zero to infinity. Dr. O'Connor testified that extrapolation was used to set the lower bounds for LCN mixtures, and that extrapolation is something that is typically done in science. The defense pointed out that neither the FBI Quality Assurance Standards nor the SWGDAM guidelines mention extrapolating lower limits.
Another criticism of OCME's LCN testing is that it does not accurately estimate the number of contributors to a mixture, tending to underestimate the number. Dr. van Daal testified that OCME's validation studies of mixtures showed that the instances of contamination were significant. She further stated that there were instances where alleles that should not have been seen, based on what was known about the touched items, were seen in two of the three PCR amplifications, which under OCME's protocol would be called as an allele. Dr. Dan Krane testified that if seven alleles are seen at a certain locus, then, because each contributor is expected to contribute at most two alleles at any given locus, the conclusion should be that the mixture came from at least four individuals. However, Dr. Krane stated that in this example, OCME's interpretation guidelines provide that the sample would be called a four-person mixture only if at least two loci show seven alleles. In Dr. Krane's opinion, this approach of setting aside the information from the locus with the largest number of alleles exacerbates the mischaracterization rate. Dr. O'Connor testified that since drop-out and drop-in are known to be more common with low template DNA samples, they must be accounted for in the interpretation process, which is what OCME had done.
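For illustration only, the following sketch contrasts the two counting approaches described in the testimony. It is not a statement of OCME's actual protocol; the locus counts are hypothetical.

```python
# Illustrative only: two ways of estimating the minimum number of contributors
# from per-locus allele counts, as described at the hearing. Not OCME's code.
from math import ceil

def min_contributors_naive(allele_counts_per_locus):
    """Each contributor carries at most two alleles per locus, so the minimum
    number of contributors is the ceiling of the largest allele count over two."""
    return ceil(max(allele_counts_per_locus) / 2)

def min_contributors_two_locus_rule(allele_counts_per_locus):
    """Variant described in testimony: rely on the highest allele count only if
    it is seen at two or more loci; otherwise fall back to the next-highest count."""
    counts = sorted(allele_counts_per_locus, reverse=True)
    if len(counts) > 1 and counts.count(counts[0]) < 2:
        supported = counts[1]   # highest count seen at only one locus is set aside
    else:
        supported = counts[0]
    return ceil(supported / 2)

locus_counts = [7, 5, 4, 4, 3]                        # hypothetical counts at five loci
print(min_contributors_naive(locus_counts))           # 4 (driven by the 7-allele locus)
print(min_contributors_two_locus_rule(locus_counts))  # 3 (7 alleles seen at only one locus)
```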
Dr. O'Connor from OCME and Dr. Buckleton testified for the People, concluding that LCN DNA testing as performed by OCME is generally accepted in the relevant scientific community as reliable and is reliable for use in criminal casework. In addition, the People introduced Dr. Gill's and Dr. Holland's opinions supporting OCME's LCN DNA testing as generally reliable. Dr. O'Connor identified Dr. Buckleton, Dr. Gill and Dr. Holland as members of the relevant scientific community.
For the defense, Dr. van Daal and Dr. Krane testified against LCN DNA testing. Dr. O'Connor testified that Dr. van Daal is a respected scientist in the field. In addition, the defense introduced articles by Dr. Budowle, Dr. Eisenberg and Dr. Chakraborty, who argued that LCN testing was not reliable. According to Dr. O'Connor, Dr. Budowle is considered the pioneer of modern day STR testing in the United States, and the hearing testimony was that all the scientists listed are or were highly respected in their fields. It was brought to this court's attention during the Frye hearing that many prominent scientists who were critical of OCME's LCN testing were associated with the University of North Texas. Specifically, Dr. Budowle, Dr. Eisenberg and Dr. Chakraborty were at the University of North Texas, and Dr. van Daal did her sabbatical at the University of North Texas and had recently worked there for a couple of years. It was also brought out during the hearing that the University of North Texas is one of the premier missing persons laboratories in the country. As far as Dr. O'Connor was aware, the University of North Texas used additional cycles in testing on its missing persons and human remains identification cases, and even Dr. Budowle endorsed it for that use. One of Dr. Budowle's reasons for using LCN for missing persons cases is that those cases often do not involve mixtures. From reviewing a prior Frye decision where Dr. Budowle had testified, his position seems to be that LCN can be relevant in criminal cases to the extent that it could produce investigative leads as to developing a suspect. People v. Collins , 49 Misc.3d 595, 575, 15 N.Y.S.3d 564. However, his position was that LCN testing results are not sufficiently reliable to be admissible evidence in a criminal case. Id.
As to the question of whether OCME's protocols and procedures are sufficient to compensate for the stochastic effects and produce reliable results, this court also finds that the People have shown that LCN testing as performed by OCME is generally considered to be reliable within the relevant scientific community. It makes no sense that LCN testing would be generally accepted science when it comes to testing a missing person's DNA or in post-conviction/Innocence Project work, but not in criminal trials. It is the same science with the same methodology and technique. In a criminal trial, there is of course the additional built-in burden of proof beyond a reasonable doubt that LCN testing, or any other scientific evidence, if admitted, must also satisfy before the jury. However, that has nothing to do with general acceptability. Furthermore, this court finds that how much weight should be given to LCN testing results is a matter of the weight of the evidence to be considered by the jury. To the extent that the defendant is arguing that the way LCN testing is applied by OCME is not generally accepted, or that LCN testing should not have been used in this particular case because the amount of DNA was below the validation threshold, these are issues that the defense can bring before the jury by cross-examination or by calling his own expert witnesses.
After considering the Frye hearing testimony of all the expert witnesses and the authoritative scientific writings introduced at the hearing, this court finds that the People have met their burden of showing that LCN testing, when properly performed, is generally considered to be reliable within the relevant scientific community. This court finds that all the scientists agree that LCN testing is a methodology based on valid science and that stochastic effects are well documented and understood. The fact that LCN testing is used for missing persons cases and post-conviction/Innocence Project cases, and the very fact that these world-renowned scientists are discussing LCN testing, show that it is a valid, accepted science and not some bogus testing, and certainly not junk science.
Therefore, considering all the evidence as a whole, this court denies the defendant's motion to preclude LCN testing results.
Forensic Statistical Tool
Probabilistic genotyping software is a tool used by an analyst to support a conclusion by assigning probabilities to the different possible genotypes in a sample and then calculating a likelihood ratio. The Forensic Statistical Tool ("FST") is the probabilistic genotyping software developed by OCME to calculate likelihood ratios when there is a DNA mixture and a distinct profile cannot be deconvoluted. The likelihood ratio is set up with one scenario in which the person of interest is part of the mixture and a second scenario in which the contributor is an unknown, unrelated person. Once all the steps of DNA testing are done and the analyst interprets the results and concludes that an individual is included as a possible contributor, then and only then would the analyst use FST to calculate the likelihood ratio. There are different types of probabilistic genotyping software: binary, semi-continuous and fully continuous. Binary is the simplest, in that it considers only whether an allele is absent or present. A semi-continuous system takes into account some additional biological phenomena, such as drop-in, drop-out and stutter ratios. A fully continuous system takes into account even more biological aspects of the sample, such as peak height, peak height ratios, and allelic and amplification efficiencies, among other things.
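By way of illustration only, the following toy sketch shows the general shape of a semi-continuous likelihood-ratio calculation at a single locus for a two-person mixture, using a simplified drop model with hypothetical allele frequencies and drop-out/drop-in parameters. It is not FST's actual model, which among other things conditions its rates on the quantitation value and combines results across many loci.

```python
# Illustrative toy only -- NOT FST's actual model. A simplified semi-continuous
# likelihood ratio at one locus for a two-person mixture:
#   H1: victim + person of interest (POI) contributed
#   H2: victim + one unknown, unrelated person contributed
# FREQ, D and C are hypothetical; real systems estimate them empirically.
from itertools import combinations_with_replacement

FREQ = {"A": 0.10, "B": 0.25, "C": 0.30, "D": 0.35}  # hypothetical allele frequencies
D = 0.20   # probability that one allele copy drops out
C = 0.05   # probability of a single drop-in event at the locus

def locus_likelihood(observed, contributor_genotypes):
    """P(observed allele set | hypothesized contributor genotypes)."""
    copies = {}
    for genotype in contributor_genotypes:
        for allele in genotype:
            copies[allele] = copies.get(allele, 0) + 1
    prob = 1.0
    for allele, n in copies.items():
        # An allele carried n times goes unseen only if every copy drops out.
        prob *= (1 - D ** n) if allele in observed else D ** n
    unexplained = [a for a in observed if a not in copies]
    if len(unexplained) == 0:
        prob *= (1 - C)                    # no drop-in needed
    elif len(unexplained) == 1:
        prob *= C * FREQ[unexplained[0]]   # one drop-in of that allele
    else:
        prob = 0.0                         # toy model allows at most one drop-in
    return prob

def likelihood_ratio(observed, victim, poi):
    # Numerator: both contributors' genotypes are known (victim and POI).
    numerator = locus_likelihood(observed, [victim, poi])
    # Denominator: victim plus an unknown person; sum over all possible unknown
    # genotypes weighted by Hardy-Weinberg genotype frequencies.
    denominator = 0.0
    for g in combinations_with_replacement(FREQ, 2):
        hw = FREQ[g[0]] ** 2 if g[0] == g[1] else 2 * FREQ[g[0]] * FREQ[g[1]]
        denominator += hw * locus_likelihood(observed, [victim, g])
    return numerator / denominator

# Hypothetical example: alleles A and B observed; victim typed A,A; POI typed A,B.
print(likelihood_ratio(observed={"A", "B"}, victim=("A", "A"), poi=("A", "B")))
```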
FST is a semi-continuous genotyping software, and STRmix, which OCME is currently using, is fully continuous. However, Dr. O'Connor testified that while a semi-continuous system is less effective than a fully continuous system because it uses less information from the sample, that does not mean that a semi-continuous system is ineffective. Dr. Curran also testified that while the consensus now is that fully continuous probabilistic genotyping is the way to interpret complex DNA mixtures, in his opinion this does not make semi-continuous programs unreliable, and he stated that the methodology still stands. In addition, Dr. Buckleton pointed out that the FBI had recently validated a new semi-continuous probabilistic genotyping software for use in its criminal casework.
OCME started developing FST in-house in 2009 and went online with it in 2011. Dr. O'Connor testified that around 2009 and 2010, when OCME was developing FST, there was only one probabilistic genotyping software, TrueAllele, that was commercially available. At that time, likelihood ratios were not widely used. However, Dr. O'Connor testified that there were other laboratories that used a probabilistic genotyping program before 2010, such as the FSS of the UK, which started using a program called LoComatioN around 2007. LoComatioN was made by Dr. James Curran, who testified for the People at this Frye hearing. Dr. Curran did testify that LoComatioN was never put on the market due to internal politics at the FSS. It was brought out during cross-examination that Dr. Ian Evett, who is a distinguished member of this scientific community, had opposed the implementation of LoComatioN. In any event, OCME modeled parts of FST after LoComatioN. The hearing testimony was that, today, the majority of labs have moved toward using likelihood ratios as a form of statistics when doing mixture interpretation and adding weight to the conclusion of a DNA analysis. In 2006, members of the International Society of Forensic Genetics ("ISFG"), including Dr. Peter Gill and Dr. Charles Brenner, published an article in Forensic Science International. Dr. O'Connor testified that the scientists at ISFG are a part of the relevant scientific community, and in particular, that Dr. Gill and Dr. Brenner are the two pioneers of the forensic statistics used in DNA analysis. This article stated that the advantage of a likelihood ratio framework was that stutter and drop-out can be assessed probabilistically. In addition, SWGDAM's recommendations, dated January 14, 2010, stated that once there is any positive association between an individual and a sample, a statistic must be performed in order to show the weight of the association. The National Academy of Sciences also came out with a report recommending that statistics should be applied to all positive associations, instead of just giving a qualitative conclusion of inclusion. Dr. O'Connor testified that it was since the release of these publications that the majority of labs moved toward using likelihood ratios in mixture interpretation.
The development of FST at OCME was led by Dr. Theresa Caragine and Dr. Adele Mitchell. Once FST was developed, an internal validation was conducted. FST was validated for all sample types ranging from one contributor to three contributors. SWGDAM came out with probabilistic genotyping software validation guidelines in 2015, after OCME had validated FST. However, Dr. O'Connor testified that OCME's validation of FST comported with the recommendations included in SWGDAM's 2015 guidelines.
OCME appeared before the DNA Subcommittee a total of four times to get approval to use FST on casework. OCME presented to the Subcommittee the validation material, drop-in and drop-out rate data, the statistical methods that were going to be used by the program, and the logic in how the computational flow was going to be done by the program. In addition, manual calculations were shown that verified the output from the program. Furthermore, the user manual and the validation results including reproducibility, sensitivity and concordance were provided.
On November 13, 2009, FST was first presented to the DNA Subcommittee, and Dr. Prinz and Dr. Mitchell presented the plans for FST and how OCME was going to proceed with the development and validation of the program. On March 5, 2010, Dr. Mitchell presented to the DNA Subcommittee the aspects of FST and how the validation and development were going. This presentation included discussion of degraded samples. The minutes from that meeting reflect that Dr. Mitchell stated that OCME had not yet completed its validation and requested feedback from the members regarding their views.
The third presentation to the Subcommittee was on May 19, 2010. Prior to the third meeting, Dr. Chakraborty, who was one of the members of the Subcommittee, had a question about the independence of the loci when it came to drop-out rates. In response, OCME conducted additional conditional testing to assess the independence or dependence of drop-out rates per locus, that is, whether there is any relationship between the rate of drop-out at one locus and the rate of drop-out at another. This issue was addressed at the third presentation by Dr. Mitchell, who stated that, based on the tests, there was no pattern or consistent dependence of the drop-out rate at one locus in comparison to another. The meeting minutes from the third presentation reflect that the Subcommittee indicated that more work was required before it could vote on the method, and the Subcommittee members gave OCME suggestions for additional work they would like to see completed regarding the independence testing. On October 8, 2010, FST was presented again to the DNA Subcommittee, and the Subcommittee reviewed and evaluated OCME's FST and offered a binding recommendation to the Forensic Science Commission that its use by OCME be approved for forensic casework. The Forensic Science Commission then met on December 7, 2010, voted, and approved the use of FST.
As mentioned above, in September of 2017, the Legal Aid Society and the Federal Defender Service made an official complaint to the New York State Inspector General's office against OCME, claiming that OCME was engaging in negligent conduct and malfeasance in the way it was performing DNA analysis, specifically referring to the use of LCN and FST. As to FST, in general, the complaint was that changes were made to the software and that OCME was using the software without a full validation of the changes and without presenting the changes to the DNA Subcommittee. The complaint was then referred to the Forensic Science Commission, which in turn referred it to the DNA Subcommittee. In response, OCME submitted a letter and supplied to the DNA Subcommittee the validation, including the performance checks of the 0.97 cap. Subsequently, the DNA Subcommittee concluded that there was no merit to the complaint. It was pointed out by the People that the majority of the DNA Subcommittee members in 2017 were different from those in 2010 when FST was initially approved.
As stated above, this court is not equating the approval of FST by the Forensic Science Commission and its DNA Subcommittee with general acceptance in the relevant scientific community. As the Court of Appeals held in Williams , this court is not ruling that the approval by the Forensic Science Commission by itself makes FST a generally accepted science or method. However, once again, the fact that a group of distinguished experts in various aspects of DNA analysis, tasked with approving DNA methodologies and setting the standard for DNA labs in New York State, approved FST is certainly relevant and should be considered when deciding the issue of general acceptance.
The day after FST went online for use on casework, OCME learned of a case that showed a negative likelihood ratio, which is a mathematical impossibility. It was determined that one of the programmers, while making some unrelated changes to the program, some of them cosmetic, had inadvertently deleted a couple of words from a line of code, which caused incorrect drop-out rates to be associated with the calculation in a couple of the picogram ranges. This caused the calculations to be done incorrectly and resulted in the negative likelihood ratio. The program was reverted to the code as it existed after the validation. In addition, in order to prevent a negative likelihood ratio, Dr. Mitchell made a change to the program, which is referred to as the 0.97 cap. If the frequencies of the alleles seen at any given locus in the evidence sample add up to 0.97 or above, then that locus is given a likelihood ratio of one, making it inconclusive. After the 0.97 cap was added, OCME conducted performance checks to ensure the program was still operating the way it was supposed to. During the performance check, some samples that were previously evaluated during the validation were reevaluated to see what likelihood ratios the program was producing, to ensure that it was doing the calculations correctly. In addition, OCME conducted a non-contributor test of 1,246 non-contributor profiles to ensure the program was calculating correctly after the changes were made. In Dr. O'Connor's opinion, the performance check demonstrated that the software was performing properly after the corrections were made. The 0.97 cap function has been criticized in part because it was not mentioned in the publication or validation materials of FST, and the software does not alert users when the rule is invoked. Mr. Nathaniel Adams testified that he had not seen change logs or documentation regarding the changes made. However, both Dr. Buckleton and Dr. Curran testified that the 0.97 function was not deliberately hidden in the code, and in fact, the code included comments clearly explaining what the function did. While Dr. Buckleton and Dr. Curran acknowledged that OCME had not released the FST code until a court ordered it to, they both testified that from reviewing the code, it was clear that there was no effort to conceal this function. The defendant pointed out that OCME did not mention the 0.97 cap in the original FST publication. However, Dr. Buckleton testified that from his conversation with Dr. Mitchell, he had learned that it was dropped during the editing of the final publication, which Dr. Buckleton found plausible, stating that such edits do happen to make an article more readable.
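For illustration only, the following minimal sketch shows the 0.97 cap as it was described at the hearing: when the frequencies of the alleles observed at a locus sum to 0.97 or above, that locus is assigned a likelihood ratio of one. The function name and the allele frequencies are hypothetical; this is not OCME's actual code.

```python
# Minimal sketch of the 0.97 cap as described in testimony. Hypothetical
# frequencies and function name; not OCME's actual implementation.
ALLELE_FREQUENCIES = {"8": 0.02, "9": 0.35, "10": 0.28, "11": 0.22, "12": 0.13}

def apply_allele_cap(observed_alleles, locus_lr, cap=0.97):
    """Return an inconclusive LR of 1 for the locus when the frequencies of the
    observed alleles sum to the cap or above; otherwise keep the computed LR."""
    total_frequency = sum(ALLELE_FREQUENCIES[a] for a in observed_alleles)
    return 1.0 if total_frequency >= cap else locus_lr

# A locus where nearly every common allele was observed is treated as
# uninformative rather than allowed to drive an extreme (or impossible) value.
print(apply_allele_cap({"9", "10", "11", "12"}, locus_lr=4.2))  # 1.0 (0.98 >= 0.97)
print(apply_allele_cap({"9", "10"}, locus_lr=4.2))              # 4.2 (0.63 < 0.97)
```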
Dr. Jeanna Matthews co-authored an article with Dr. Dan Krane, Mr. Nathaniel Adams, Ms. Jessica Goldthwaite of the Legal Aid Society, Mr. Clinton Hughes, one of the defense counsels on this case, and others in which results from FST with the allele cap function were compared to results without it. Dr. Matthews testified that the results showed that FST with the allele cap function skewed toward inaccuracy, meaning it excluded true contributors and included non-contributors. The article also compared the likelihood ratio results for 28,000 non-contributor samples between FST with the allele cap function and FST without the function; each version produced 23 false positives, but they were not the same 23 samples. During cross-examination, it was brought out that of the 28,000 non-contributor comparisons done by Dr. Matthews's team, there was only one instance of a false inclusion in the strong support category and one in the very strong support category with the allele cap on. The 0.97 cap has also been criticized as producing results that favor the prosecution. However, Dr. Buckleton testified that the 0.97 function altered things roughly equally for the prosecution and for the defense. In addition, Dr. Curran testified that he had co-authored an article examining FST with Dr. Buckleton, Julia Gasston, Maarten Kruijver, Jo-Anne Bright and Simone Pugh, and it was their conclusion that the 0.97 function was not uniformly prejudicial to the defense, that FST was reliable and that it was not unduly affected by this function.
Another criticism of FST is that it utilizes the quantitation value to estimate the probability of drop-out. During the FST validation, OCME estimated drop-out rates for single-source template quantities ranging from 6.25 to 500 picograms, and for mixtures from 25 to 500 picograms. For evidence samples with DNA template quantities that fell between those drop-out estimation points, FST interpolated to determine the appropriate rate to use. When OCME developed FST, it used empirically derived drop-in and drop-out rates, meaning it physically amplified over 2,000 known donor samples, counted how often drop-in and drop-out were seen, and used that data in the program itself. The drop-in and drop-out rates were adjusted based on the type of sample, such as whether it was a high template sample, low template sample, two-person mixture or three-person mixture, the quant value, etc. Dr. O'Connor testified that FST, like most semi-continuous probabilistic genotyping systems, did not take into account peak height because once you get into the low copy number range, peak heights are not as good an indication of the amount of DNA in the sample.
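For illustration only, the sketch below shows one simple way a program could interpolate a drop-out rate for a quantitation value falling between empirically calibrated points, here using linear interpolation. The calibration numbers are invented and the actual interpolation scheme used by FST may differ.

```python
# Illustration only: hypothetical drop-out rates empirically estimated at a few
# calibration template quantities (in picograms), with linear interpolation for
# quantitation values falling between calibration points. Numbers are invented;
# FST's actual rates and interpolation scheme may differ.
CALIBRATION = [          # (template quantity in pg, estimated drop-out rate)
    (25.0, 0.55),
    (50.0, 0.35),
    (100.0, 0.18),
    (250.0, 0.07),
    (500.0, 0.02),
]

def interpolated_dropout_rate(quant_pg: float) -> float:
    """Linearly interpolate a drop-out rate for a quantitation value between
    calibration points; clamp to the ends of the calibrated range."""
    if quant_pg <= CALIBRATION[0][0]:
        return CALIBRATION[0][1]
    if quant_pg >= CALIBRATION[-1][0]:
        return CALIBRATION[-1][1]
    for (x0, y0), (x1, y1) in zip(CALIBRATION, CALIBRATION[1:]):
        if x0 <= quant_pg <= x1:
            fraction = (quant_pg - x0) / (x1 - x0)
            return y0 + fraction * (y1 - y0)

print(interpolated_dropout_rate(75.0))   # halfway between 0.35 and 0.18 -> 0.265
```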
Dr. Dan Krane testified at this hearing that OCME and the now inactive Austin Police Department Laboratory were the only laboratories in the world that claimed they could correlate drop-out with DNA quantity as determined at the quantitation step prior to PCR amplification. In addition, Dr. Krane testified that extrapolation is inappropriate in establishing the reliability of a methodology in forensic DNA profiling during a validation study, and that what FST did with drop-out rates below 25 picograms can be characterized as extrapolation. Dr. Eli Shapiro, who was the assistant director of OCME, had testified at another Frye hearing that OCME arbitrarily lowered the drop-out rates below the empirically observed rates and used pristine, high-quality buccal exemplar swabs to calculate the drop-out rates, which are not appropriate for touch DNA samples. It was Dr. Shapiro's opinion that OCME's claim that underestimating drop-out is conservative is not always correct, and he opined that FST is not generally accepted in the relevant scientific community. Dr. van Daal testified that the quantity of DNA is not an appropriate metric to assess the potential stochastic effects that occur during amplification of DNA mixture evidence. Mr. Adams testified at this hearing that he has examined multiple probabilistic genotyping systems and has not seen a quantitation-based drop-out rate function other than in FST.
However, Dr. Buckleton testified that while the quantification value is believed to be inferior to peak height, it is not "appalling" or "bad" in any way. In addition, Dr. Buckleton disagreed with Dr. Krane that OCME and the Austin police lab were the only two labs in the world that used the quant value to estimate drop-out rates, but in any event, stated that the real question was whether the FST method was adequate, not how many labs had used it. In that regard, Dr. Buckleton testified that although his preference is to work from peak heights, the method of utilizing quant used by OCME is the single most sophisticated use of quant he has seen. Dr. Buckleton also testified that the values used by OCME most likely produce a conservative result. OCME similarly had stated that it lowered the drop-out rate in FST because it believed that a lower drop-out rate would result in a more conservative likelihood ratio. Furthermore, Dr. Buckleton testified that SWGDAM's 2010 guidelines and John Butler's 2014 book both mention that the quant value is a viable method.
Dr. Matthews and Mr. Adams also criticized FST for not adhering to the Institute of Electrical and Electronics Engineers ("IEEE") standard, which is a standard for system, software and hardware verification and validation. IEEE recommends different levels of verification and validation depending on the impact or consequences the system will have on society. If the consequences of the system would be catastrophic, which is the highest level, IEEE recommends that there be technically, managerially and financially independent verification and validation. Dr. Matthews testified that probabilistic genotyping, including FST, should be placed in the category of catastrophic consequences, since going to prison for many years would be an extensive financial or social loss. As FST was developed and validated in-house at OCME, Dr. Matthews and Mr. Adams testified that FST did not meet the IEEE standard.
However, the New York State Forensic Science Commission, SWGDAM, the International Society for Forensic Genetics and ANAB all do not require probabilistic genotyping programs to meet the IEEE standard. It was also brought out that STRmix and TrueAllele do not conform to the IEEE standard either. It was Dr. Buckleton's understanding that for FST, OCME had contracted with a private company to do the coding, although the coding engineers were actually embedded at OCME. However, Dr. Buckleton believed that OCME did have the managerial and fiscal independence in place, so in his view, OCME was probably close to conforming to the IEEE standard. But the bottom line is that there are no forensic regulations or requirements mandating that the IEEE standards be followed.
In 2016, the President's Council of Advisors on Science and Technology ("PCAST") published a report stating, "[w]hen further studies are published, it will likely be possible to extend the range in which scientific validity has been established to include more challenging samples. As noted above, such studies should be performed by or should include independent research groups not connected with the developers of the methods and with no stake in the outcome." Dr. O'Connor testified that while such independent research is certainly useful, he does not think that it is necessary in order to establish the validity of a program. In addition, Dr. O'Connor testified that PCAST is an advisory body, not an accrediting body, and as such, its recommendations are not binding. After the PCAST report came out, a few groups within different disciplines issued statements disagreeing with its conclusions; within the forensic DNA community, the FBI, the Department of Justice, the National District Attorneys' Association, the American Academy of Forensic Sciences and Dr. Bruce Budowle disagreed with it. The People pointed out that none of the authors of the PCAST report were forensic scientists. However, the defense brought out during re-cross that PCAST did consult with experts including John Butler, Kareem Belt (a former analyst at OCME), John Buckleton, Bruce Budowle, Itiel Dror, Ian Evett, Glenn Langenburg, Catherine Grgicak and Norah Rudin.
The fact that OCME switched from FST to STRmix was discussed at the hearing as well. In the fall of 2016, OCME notified its customers that it would be switching to a probabilistic genotyping program called STRmix. Dr. O'Connor testified that OCME had validated FST for use on samples amplified with the Identifiler testing kit. Therefore, once OCME switched from Identifiler to Fusion, OCME had to revalidate FST in order to use it on samples amplified with Fusion. However, instead of investing the resources and time to revalidate FST, OCME decided to use the commercially available program STRmix. Furthermore, STRmix is a fully continuous probabilistic genotyping software, compared to FST, which is semi-continuous, so it takes into account more information from the DNA sample as it calculates the different probabilities and the likelihood ratio. However, Dr. O'Connor testified that FST is still online to be used to calculate a likelihood ratio in situations where a sample was amplified with the Identifiler kit.
In Rochat , the New Jersey Appellate Court found that FST was not generally accepted as reliable within the relevant scientific community. Rochat , 470 N.J.Super. at 441-442, 269 A.3d 1177. In that case, the court noted that FST had been used and examined only by OCME, and that all the court had before it was FST's approval by the DNA Subcommittee, which standing alone did not establish that FST was widely accepted as reliable. Id. It is true that OCME is the only lab that has used FST. However, the evidence at this hearing was that FST has been tested and subjected to peer review. Jones , 2018 WL 2684101 at *12. Dr. O'Connor testified that OCME had given lectures, presentations and workshops involving FST on many occasions. In addition, OCME published the FST validation summary in 2012, the year after its validation, in a scientific journal, and the validation was also presented at a forensic science conference. Dr. O'Connor testified that OCME had published two papers describing the FST validation. In 2012, the ISFG published an article in Forensic Science International: Genetics. Dr. Gill was one of the authors of this article. The article stated that the probability of drop-out can be estimated by logistic analysis or by using an empirical approach, and it referenced a paper OCME had published outlining the development and validation of FST. This shows that FST has been presented to and discussed in peer-reviewed journals. Therefore, unlike the court in Rochat , this court is not relying solely on the fact that FST was approved by the DNA Subcommittee. Rochat is not controlling authority in New York State, and this court, for the reasons stated just above, declines to follow Rochat and, in fact, disagrees with its conclusions. Furthermore, Rochat seems to impose yet an additional requirement of "clearly establishing" general acceptance, which is not part of the Frye rule in New York.
At this hearing, in addition to Dr. O'Connor from OCME, Dr. Buckleton and Dr. Curran testified for the People stating that FST software is a semi-continuous probabilistic genotyping program that is suitable to be used in forensic casework. In addition, it was their opinion that the methods used in FST are considered reliable and generally accepted in the scientific community. Dr. Hilda Haned of the Netherlands Forensic Institute, who had worked with Dr. Gill to develop a likelihood ratio program, testified at a prior Frye hearing that FST is a reliable method for determining a likelihood ratio. Collins , 49 Misc.3d at 616, 15 N.Y.S.3d 564. In particular, Dr. Haned was impressed with OCME's use of quant to calculate which drop-in and drop-out probability statistics to employ. Id.
For the defendant, Dr. van Daal and Dr. Krane testified that FST was not generally accepted in the scientific community. Dr. Budowle had opposed the admissibility of FST for criminal court use at a prior Frye hearing. Dr. Heather Coyle, a professor of forensic science at the University of New Haven who had worked at the state DNA laboratory in Connecticut, had testified at a prior hearing that FST was not generally accepted in the relevant scientific community. Dr. O'Connor testified that Dr. Coyle is respected in the field. Dr. Shapiro, whom Dr. O'Connor also considered part of the relevant scientific community and who had worked at OCME, had testified at a prior hearing that in his opinion, FST is not generally accepted in the relevant scientific community. The defense also introduced evidence that Dr. Alan Jamieson of Scotland had testified at a prior hearing that FST was not generally accepted in the relevant scientific community. Dr. Matthews and Mr. Adams criticized FST from a computer scientist's and a software engineer's point of view, stating that FST was not independently validated or verified and did not sufficiently record changes made to the code.
Dr. Budowle had testified that FST was unique in the way it determines drop-in and drop-out. Dr. Buckleton testified that it was his understanding that Dr. Budowle had concerns with the assignment of the probability of drop-out and drop-in, and especially with the use of the quantitation value. However, Dr. Buckleton believed that while Dr. Budowle had a problem with the implementation of the drop model, he did not have a problem with the principle itself.
Considering all the evidence before it, this court finds that the use of probabilistic genotyping software is generally accepted in the relevant scientific community as reliable. In fact, it is this court's understanding that the majority of labs today use some type of probabilistic genotyping software, as recommended by the National Academy of Sciences, the ISFG and SWGDAM. In addition, it is clear to this court that fully continuous probabilistic genotyping software is an advancement over semi-continuous software, which includes FST, and is the preferred approach today. However, even the defense's experts did not testify that semi-continuous probabilistic genotyping software is unreliable altogether. Furthermore, based on all the evidence presented at this hearing, this court finds that the People have met their burden of showing that FST, as used by OCME, is generally accepted in the relevant scientific community. FST comports with the recommendations of SWGDAM, and there appears to be a consensus that FST is in fact a generally accepted tool of a valid science.
The criticisms brought forth by the defense experts concerning FST have been sufficiently addressed and answered by the People's experts' opinions. It is well established that for a scientific technique or method to be considered generally accepted in the relevant community, it does not have to be unanimously endorsed, meaning that criticism and debate on the issue can exist. While the defense's expert witnesses opined that FST was not generally accepted in the relevant scientific community and stated that it should not be admitted in criminal court, this court finds that their arguments have been answered persuasively by the People's experts. The arguments made by the defense experts go to the weight of the evidence, not its admissibility. For instance, the defendant can cross-examine witnesses and present his own expert witnesses to criticize OCME's use of drop-out rates based on the quantification value or the 0.97 cap that was implemented. This court finds that it is the jury's function to weigh the credibility of competing scientific opinions and to determine the appropriate weight to give to the FST evidence.
Therefore, considering all the evidence as a whole, this court denies the defendant's motion to preclude FST evidence.
In conclusion, in following the holding in Williams , this court did not simply rely on the validations and testing of OCME. Of course, it was necessary for this court to consider where this purported new evidence is coming from, namely, OCME. OCME is clearly the leading laboratory in the field of DNA in the country. Both in size and volume, it is the forerunner in the field and not just some "mom and pop" store. In conducting Frye hearings, the courts are tasked with the responsibility of serving as the gatekeeper against evidence that clearly does not deserve to be presented to a jury, including any "junk science." However, this court did not accept the evidence put forth by OCME simply because of its size or reputation. This court evaluated and analyzed all the expert testimony in this case from both sides, as well as the various literature the experts testified about. Ultimately, this court was more convinced by the People's expert testimony and was sufficiently persuaded that the People met their burden of showing that the offered evidence, the LCN testing and FST, met the Frye standard of acceptability within the relevant scientific community. Therefore, this evidence can be presented to the jury at trial.
Addressing the issue of whether such evidence should be presented to the jury, the court in Collins stated that because the experts in the DNA field cannot agree on the weight to be given to evidence produced by LCN testing and FST, it would not make sense to throw such evidence before a lay jury and ask the jurors to give the evidence appropriate weight. Collins , 49 Misc.3d at 612, 15 N.Y.S.3d 564. However, this court respectfully, but strongly, disagrees. It has always been the jury's role to give appropriate weight to admitted evidence. No one may presume what weight a jury would give to any admitted evidence, and no court can usurp that role.
In the federal courts, so long as the evidence satisfies the lower threshold of Daubert , it is given to the jury to weigh as it sees fit. While this court is fully aware that the Frye and Daubert standards are different, it makes no sense that a federal jury can evaluate LCN testing and FST and give them their appropriate weight, but a state jury cannot. In fact, state jurors receive evidence that has been subject to even more screening because of the higher Frye threshold.
Wherefore, for the reasons stated above, the defendant's motion to preclude the DNA evidence in his case is denied.
The foregoing constitutes the decision of the court.