Opinion
2:20-cv-01503-JHC
06-08-2023
ORDER RE: MOTION TO EXCLUDE CERTAIN OPINIONS OF LAUREN R. KINDLER AND NXP'S MOTION IN LIMINE
John H. Chun United States District Judge
There are two motions before the Court. First is NXP's motion to exclude certain opinions of expert Lauren R. Kindler. Dkt. # 279; see also Dkt. # 339 (reply brief). Impinj opposes the motion. Dkt. # 326. Second is one of NXP's motions in limine. Dkt. # 458. The Court previously reserved ruling on this motion in limine. Dkt. # 516 at 2.
The Court issued an order on June 2, 2023, setting a Daubert hearing. Dkt. # 513. That order allowed the parties to submit additional materials and asked several questions. Id. Kindler filed a declaration on June 6, 2023, Dkt. # 517, and NXP filed a supplemental response on June 7, 2023, Dkt. # 518. The Court held a Daubert hearing on June 8, 2023. Dkt. # 521.
For the reasons below, the Court:
(1) GRANTS NXP's motion to exclude certain opinions of Lauren R. Kindler in part, DENIES it in part, and STRIKES it in part as moot. Dkt. # 279.
(2) GRANTS in part and DENIES in part NXP's second motion in limine. Dkt. # 458.
I
Legal Standards
An expert witness may provide opinion testimony if “(a) the expert's scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue; (b) the testimony is based on sufficient facts or data; (c) the testimony is the product of reliable principles and methods; and (d) the expert has reliably applied the principles and methods to the facts of the case.” Fed.R.Evid. 702.
“Before admitting expert testimony into evidence, the district court must perform a gatekeeping role of ensuring that the testimony is both relevant and reliable under Rule 702.” United States v. Ruvalcaba-Garcia, 923 F.3d 1183, 1188 (9th Cir. 2019) (quotation marks omitted) (quoting Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 597 (1993)). But “[u]nder Daubert, the district judge is a gatekeeper, not a fact finder.” Primiano v. Cook, 598 F.3d 558, 564-65 (9th Cir. 2010) (citation and quotation marks omitted). “Shaky but admissible evidence is to be attacked by cross examination, contrary evidence, and attention to the burden of proof, not exclusion.” Id. at 564. “In Daubert, the Court addressed the proper standard for admitting expert testimony and emphasized that the focus ‘must be solely on principles and methodology, not on the conclusions that they generate.’” Apple Inc. v. Motorola, Inc., 757 F.3d 1286, 1313-14 (Fed. Cir. 2014) (quoting Daubert, 509 U.S. at 595), overruled on other grounds by Williamson v. Citrix Online, LLC, 792 F.3d 1339 (Fed. Cir. 2015) (en banc).
II
Motion to Exclude Certain Opinions of Kindler
A. Panduit Analysis for Lost Profit Damages
NXP argues that Kindler misapplied the Panduit factors used to calculate lost profit damages. Dkt. # 279 at 10-12. But in a previous order, the Court held that NXP may not seek lost profit damages. Dkt. # 452 at 13-21.
Accordingly, the Court strikes this portion of the motion as moot.
B. Reliance on Oliver for Valuation Opinions
“[W]here multi-component products are involved, the governing rule is that the ultimate combination of royalty base and royalty rate must reflect the value attributable to the infringing features of the product, and no more.” Ericsson, Inc. v. D-Link Sys., Inc., 773 F.3d 1201, 1226 (Fed. Cir. 2014). Thus, “[w]hen the accused infringing products have both patented and unpatented features, measuring this value requires a determination of the value added by such features.” Id.; see also ResQNet.com, Inc. v. Lansa, Inc., 594 F.3d 860, 869 (Fed. Cir. 2010) (“[A] reasonable royalty analysis requires a court to hypothesize, not to speculate.... [T]he trial court must carefully tie proof of damages to the claimed invention's footprint in the market place.”).
In her report, Kindler concludes that she “understand[s]” that the accused FastID feature accounts for approximately 1% - 2% of the value of Impinj's products, while the accused TagFocus feature accounts for approximately 3% - 4% of the value. Dkt. # 281-3 at 37. She also concludes that the unaccused features would comprise “no less than 90% of the value of Monza 4 relative to Monza 3.” Id. These conclusions are based on “discussion[s] with Mr. Ron Oliver,” an Impinj employee. Id. Kindler also bumps the upper bound of her estimate of the value of both TagFocus and FastID to 5% each. Id. (“Assuming that (i) FastID accounts for 1% - 5% and (ii) Tag Focus accounts for 3% - 5% of the value of Monza 4 relative to Monza 3. . .”).
NXP argues that Kindler's apportionment opinions are unreliable and should be excluded. Dkt. ## 279 at 12-16, 339 at 4-6. Specifically, NXP asserts that “Kindler was required to apportion the value of the accused FastID and TagFocus features in a manner that captures the value attributable to the asserted patents.” Id. at 12. But, NXP says, the apportionment percentages that Kindler purportedly assigns to these features are not based on her own analysis or conclusions. Id. Instead, NXP says that Kindler's opinions are based entirely on the apportionment figures provided to her by Impinj's employee, Ron Oliver, which Kindler then uses as the foundation of her analysis. Id. The Court agrees with NXP that Kindler's quantitative apportionment opinions are not based on a sufficiently reliable methodology and are therefore subject to partial exclusion. But the Court believes her qualitative apportionment opinions are sufficiently reliable, so the Court does not exclude those.
Calculating a reasonable royalty rate necessarily entails estimation. But an expert's apportionment analysis may not be “plucked out of thin air.” LaserDynamics, Inc. v. Quanta Comput., Inc., 694 F.3d 51, 69 (Fed. Cir. 2012); see also id. (criticizing expert's methodology in which his “one-third apportionment to bring his royalty rate down from 6% per ODD to 2% per laptop computer appears to have been plucked out of thin air based on vague qualitative notions of the relative importance of the ODD technology.”). “Experts must follow some discernable methodology, and may not be a black box into which data is fed at one end and from which an answer emerges at the other.” NetFuel, Inc. v. Cisco Sys. Inc., No. 5:18-CV-02352-EJD, 2020 WL 1274985, at *2 (N.D. Cal. Mar. 17, 2020) (quoting GPNE Corp. v. Apple, Inc., 2014 WL 1494247, at *4 (N.D. Cal. Apr. 16, 2014)). “While the Federal Circuit allows for ‘some approximation' in the reasonable royalty context, this ‘does not negate the Federal Circuit's requirement of sound economic and factual predicates for that analysis.'” Id. at *7 (quoting Cornell Univ. v. Hewlett-Packard Co., 2008 WL 2222189, at *2 (N.D.N.Y. May 27, 2008) (Rader, C.J., sitting by designation) (quotation marks omitted)).
The Court concludes that Kindler's quantitative apportionment analysis does not adhere to a sufficiently reliable methodology. Kindler relies almost entirely on the apportionment figures provided to her by Impinj employee, Ron Oliver. For example, her report states: “Based on a discussion with Mr. Ron Oliver, I also understand that (1) FastID accounts for approximately 1% - 2% and (2) TagFocus accounts for approximately 3% - 4% of the value of Monza 4 relative to Monza 3.” Dkt. # 281-3 at 37. She also concludes-“[b]ased on a discussion with Mr. Ron Oliver”-that the unpatented features would comprise “no less than 90% of the value of Monza 4 relative to Monza 3.” Id. Her report provides no meaningful explanation of how she arrived at these figures aside from her conversations with Oliver.
During her deposition, Kindler stated that her apportionment estimates were provided to her by Oliver:
I got [Oliver's] reaction to Mr. Haas's opinions but also his own independent evaluation as to the relative contribution of these features . . . And then I asked him “Well, if you had to come up with a specific quantifiable contribution for FastID and TagFocus, like, what do you think would be a reasonable amount?” And he gave me the 1 to 2 percent for FastID, and I think it was . . . 3 to 4 percent for TagFocus.... [W]e had a lengthy call talking through the various features of Monza 4 relative to Monza 3 and what Mr. Haas and NXP's technical expert had opined to and got his reaction to all that and his own independent assessment based on his own expertise.
Dkt. # 281-7 at 15-16; see also id. at 19 (conceding that her “upper bound” for the benefit of each allegedly patented feature was based on what “he [Oliver] concluded” about the relative benefits of the product features). Kindler admitted that she could not explain how Oliver arrived at his conclusions. See id. at 18 (“Q: On what basis did Mr. Oliver form his opinion that TagFocus feature was worth the whole - the FastID feature? A: . . . Ultimately, he - I don't know why. I mean, we didn't get into the details as to why, but he concluded that the relative contribution of TagFocus would be approximately 3 to 4 percent based on his experience.”); id. at 23 (“Well, I said at the beginning I don't know how he came up with the numbers.”); id. at 12-13, 18-19. She admitted that she did not understand why, for example, Oliver gave TagFocus more weight than FastID. Id. at 18-19 (“Q: But you don't know whether [the frequency of use of the accused features was] the reason he opined to 3 to 4 percent for TagFocus versus 1 to 2 percent for FastID? A: No. I'm - I'm connecting some dots, but no, I don't know if that's the reason.”). And she admitted that if Oliver's conclusion that “at least 90%” of the products' value came from unpatented features was incorrect, that would have affected the outcome of her analysis. Id. at 17, 27 (“Now, the 90 percent could matter, if he's wrong on that, but I don't have any reason to believe that he's wrong on that.”).
Worse yet, Oliver-whose conclusions serve as the backbone of Kindler's apportionment analysis-admitted in deposition (in the co-pending Texas litigation) that he generally could not quantitatively isolate the value of a given patented feature. For example, he said that he “cannot assign to each feature individually a value and then further within each feature I cannot dissect that down into individual innovations and the contribution from [the patents].” Dkt. # 341-1 at 10; see also id. at 8 (“Q: You said it would be hard to assign a percentage, but is it possible to even assign a percentage? . . . A: That would be a question for the market....[M]y only response would be that that would be a question for the market, that Impinj does not have enough information to answer.”); id. (“[I]t would be really hard to assign a - a percentage value to that.”); id. at 6 (“Q: What is the methodology that Impinj undertakes to do that, you mentioned [customer] comments, is there some - does that get formalized somehow in terms of a ranking or a priority or a value? A: I don't know that there is a formalized ranking. In my experience, there have been discussions, sometimes involving sales personnel, sometimes involving marketing that just on a qualitative basis look at what's the perceived value. It is very hard to assign any kind of a quantity in the ranking system.”). Indeed, when questioned about apportionment, Oliver implied that he is not particularly well-suited to assess the relative value of features. See Dkt. # 519-1 at 19-20 (“[T]here are people that are closer to the market than I am that would be in a better position to answer. Those would be marketing people or they would be members of the sales team.”). And critically, because Impinj did not designate Oliver as an expert witness (Dkt. # 330 at 14), he did not prepare an expert report. So there is no documentary evidence of any kind explaining how he reached his quantitative apportionment opinions.
Oliver's testimony in the co-pending California litigation was directed to different patents and, in some cases, different products. But his deposition testimony is generalizable: Nothing in his deposition testimony suggests that he would have answered differently for the patents and accused products at issue in this case. And in fact, he was asked at his deposition about the accused FastID and TagFocus features.
To be sure, Kindler did explain that Oliver's opinions were “consistent [] at least qualitatively” with “documentary evidence” that she reviewed. Dkt. # 281-7 at 24. She also explained that she had a “multistep discussion” with Oliver about his opinions, during which she “tried to inquire further . . . to understand . . . how he arrived to the 90 percent figure.” Id. at 26. And elsewhere in her report, Kindler explains that press releases and other documentary evidence emphasized features of the accused products other than TagFocus and FastID, suggesting that those features account for a limited share of the products' total value. Dkt. # 281-3 at 24-29.
But this does not cure Kindler's unexplained reliance on the figures provided by Oliver. See Netfuel, 2020 WL 1274985, at *7 (“This is fatal; without explaining how he arrived at ‘33%,' the figure appears to have been ‘plucked out of thin air based on vague qualitative notions of the relative importance of [the patented features.]'” (quoting LaserDynamics, 694 F.3d at 69)). Kindler does not explain how she came up with the 3-4% figure, the 1-2% figure, or the no-more-than-90% figure. The documentary materials relied on by Kindler may suggest that NXP's apportionment overstates the value of the accused features. But there is no evidence in the report or deposition testimony that she did, in fact, rely on such evidence to calculate her particular apportionment figures. Instead, the only reasonable interpretation of her report is that she adopted the figures provided by Oliver without independent evaluation. And how Oliver calculated those figures is anyone's guess. See Guardant Health, Inc. v. Found. Med., Inc., No. CV 17-1616-LPS-CJB, 2020 WL 2461551, at *18 (D. Del. May 7, 2020) (requiring “an evidentiary foundation for the particular percentage selected-i.e., for describing the methodology used to reach that number”), report and recommendation adopted, No. CV 17-1616-LPS-CJB, 2020 WL 5994155 (D. Del. Oct. 9, 2020); cf. Netfuel, 2020 WL 1274985, at *9 (admitting expert opinion when the expert “explains the facts underlying his apportionment percentages, how they connect to his apportionment percentages, and the methodology he used to arrive at his apportionment percentages”).
To be clear, the Court does not imply that a damages expert's reliance-even heavy reliance-on a technical expert is inappropriate in apportionment analysis. Nor does the Court imply that reliance on a party's own employee is improper. See Apple Inc. v. Motorola, Inc., 757 F.3d 1286, 1322 (Fed. Cir. 2014) (“A rule that would exclude Apple's damages evidence simply because it relies upon information from an Apple technical expert is unreasonable and contrary to Rules 702 and 703 and controlling precedent.”). To the contrary, a technical expert or a party's employee may be particularly well-positioned to understand the relative technological benefits of the accused products. See, e.g., Realtime Data, 2017 WL 11574028, at *5; Apple, 757 F.3d at 1322. And because there is often a connection between a feature's technological value and its economic value, economic apportionment may correlate-even strongly correlate-with the relative technological importance of the patented feature. See Realtime Data, 2017 WL 11574028, at *5.
But even if, as a general matter, damages experts may rely on party employees or technical experts, on these facts, Kindler was not entitled to wholesale adopt the apportionment figures of Oliver in the manner she did.
First, by Oliver's own admission, he could “only assign relative rankings,” not “an exact market value to that feature set.” Dkt. # 519-1 at 28 (emphasis added). It was unreasonable for Kindler to rely nearly exclusively on quantitative estimates from an employee who admitted that he could not provide quantitative estimates. The ipse dixit of an expert-itself relying on the ipse dixit of a third-party-cannot alone provide a sufficiently reliable methodology. See Gen. Elec. Co. v. Joiner, 522 U.S. 136, 146 (1997) (“[N]othing in either Daubert or the Federal Rules of Evidence requires a district court to admit opinion evidence that is connected to existing data only by the ipse dixit of the expert.”).
Second, if Kindler wanted to rely on Oliver's apportionment estimates rather than calculate her own, Kindler needed, at minimum, to ensure that Oliver's methodology was adequately explained, either in her report or elsewhere in the record. See NetFuel, 2020 WL 1274985, at *6 n.5 (“[B]ecause Mr. Bratic's analysis depends on the validity of Dr. Rubin's analysis, in order for Mr. Bratic's analysis to stand, Dr. Rubin's figure must be ‘reliable.'”). For example, Kindler could have explained the assumption that led Oliver (and her) to conclude that FastID is less valuable than TagFocus, or how Oliver determined that True3D was worth 30-35%, rather than, say, 20% or 40%. Without such explanations, the Court is left without knowing whether Oliver's estimates (and by extension, Kindler's estimates) are methodologically sound or amount to speculation. See id. at *9 (“[B]ecause Dr. Rubin failed to provide the ‘methodology' underlying his apportionment amount or explain how he arrived at that figure based on the facts of this case, his apportionment opinion is not backed by ‘sufficient facts or data' or by ‘reliable principles and methods.'”).
Third, based on her deposition testimony, the most plausible inference is that Kindler adopted Oliver's quantitative figures without much independent analysis of her own. See Dkt. # 341-1 at 18 (“Q: On what basis did Mr. Oliver form his opinion that TagFocus feature was worth the whole - the FastID feature? A: . . . Ultimately, he - I don't know why. I mean, we didn't get into the details as to why, but he concluded that the relative contribution of TagFocus would be approximately 3 to 4 percent based on his experience.”); id. at 23 (“Well, I said at the beginning I don't know how he came up with the numbers.”); id. 18-19 (“Q: But you don't know whether [the frequency of use of the accused features was] the reason he opined to 3 to 4 percent for TagFocus versus 1 to 2 percent for FastID? A: No. I'm - I'm connecting some dots, but no, I don't know if that's the reason.”).
The Court does not suggest that a damages expert's apportionment opinions must be perfect. Far from it: “Determining a fair and reasonable royalty is often . . . a difficult judicial chore, seeming often to involve more the talents of a conjurer than those of a judge.” ResQNet.com, 594 F.3d at 869 (quoting Fromson v. Western Litho Plate & Supply Co., 853 F.2d 1568, 1574 (Fed. Cir. 1988)); see also Aqua Shield v. Inter Pool Cover Team, 774 F.3d 766, 771 (Fed. Cir. 2014) (explaining that royalty calculations often involve “approximation and uncertainty”); Virnetx, Inc. v. Cisco Sys., Inc., 767 F.3d 1308, 1328 (Fed. Cir. 2014) (stating that “absolute precision” is not required in apportionment, as “it is well-understood that this process may involve some degree of approximation and uncertainty”); Whitserve, LLC v. Comput. Packages, Inc., 694 F.3d 10, 31 (Fed. Cir. 2012) (“[M]athematical precision is not required.”). Apportionment necessarily requires some approximation, and an expert should not be excluded simply because she relies on proxies for relative value, provides only rough estimates, or otherwise cannot perfectly quantify the relative value of features. NXP's apportionment analysis is undoubtedly imperfect, too. See Dkt. # 287-2 at 90-101 (relying, for example, on proxies to determine valuation). But while the Federal Circuit permits “some approximation” in calculating a reasonable royalty, this “does not negate the Federal Circuit's requirement of ‘sound economic and factual predicates' for that analysis.” Cornell Univ., 2008 WL 2222189, at *2 (Rader, C.J., sitting by designation) (quoting Riles v. Shell Expl. & Prod. Co., 298 F.3d 1302, 1311 (Fed. Cir. 2002)). Nor can an expert's calculations amount to little more than a “black box into which data is fed at one end and from which an answer emerges at the other.” GPNE Corp., 2014 WL 1494247, at *4.
Here, Kindler does not explain how she landed on her 3-4% figure, the 1-2% figure, or the no-more-than-90% figure besides the ipse dixit of Oliver. Oliver, in turn, provides no methodology of his own for how he arrived at these figures. The Court recognizes that qualitative inputs and subjective impressions can inform apportionment analysis. But the expert must explain how those inputs lead to the calculation of the specific figures provided. Kindler's report fails to provide a reliable methodology for her quantitative apportionment conclusions, and thus is subject to partial exclusion.
And indeed, Kindler reviewed documents-like third-party articles-in explaining why certain features like True3D are more valuable than others.
Impinj makes several arguments in response, but none is persuasive.
First, Impinj points to NXP expert David Haas's reliance on NXP's employee, Ralf Kodritsch, for certain facts in his apportionment analysis. Dkt. # 330 at 16. It is not lost on the Court that there are similarities between Haas's reliance on Kodritsch and Kindler's reliance on Oliver. After all, Kodritsch provides critical information to Haas about the relative value of various features just as Oliver does.
And as noted by Impinj's counsel at the Daubert hearing, the Court recognizes that by telling Haas that certain features have no value, Kodritsch was, in a sense, providing a quantitative assessment of a given feature, assigning it 0% value.
But the Court believes there is an important distinction between Kindler's reliance on Oliver and Haas's reliance on Kodritsch. Unlike Kindler, Haas relied on Kodritsch's input about the relative importance of features to customers before generating his own apportionment figures. See, e.g., Dkt. # 287-2 at 99. Kodritsch does not provide the apportionment valuations himself. And Haas also explains some assumptions behind Kodritsch's conclusions (e.g., that “inlay compatibility” was “not a specific product benefit,” but “simply a function of the fact that all of the chips were built on the same silicon,” and that “memory write speed beyond a certain point was not a feature that drove customer purchase decisions, especially among industrial users”). Id. at 94.
But more fundamentally, Haas explains in his report how he uses Kodritsch's input to arrive at his particular apportionment figures. Haas explains that he started with 14 features, eliminated 5 that were present in the prior version of the product, discounted 3 that he believed present limited economic value, and then applied some methodology (borrowing Kindler's 25% figure for read sensitivity, and dividing the remaining value equally among five other features) to determine his ultimate apportionment figures. One can certainly take issue with Haas's methodology (and indeed, Kindler does)-it is debatable at best, for example, whether the remaining five features warrant equal consideration, or whether True3D should not be given any weight. But at the very least, one can discern (and evaluate) the methodology used, however imperfect. NetFuel, 2020 WL 1274985, at *7 (requiring a “discernable methodology”). By contrast, there is no explanation about how Kindler (or Oliver) determined that TagFocus is worth 3-4%, FastID is worth 1-2%, True3D is worth 30-35%, read sensitivity is worth 25-30%, or memory options are worth 20%. Dkt. # 281-3 at 37.
Second, Impinj argues that Kindler could rely on Oliver's opinions because under Federal Rule of Evidence 703, “an expert may rely on otherwise inadmissible facts or data, such as purportedly technical opinions of lay witnesses, if ‘experts in the particular field would reasonably rely on those kinds of facts or data in forming an opinion on the subject' and ‘their probative value in helping the jury evaluate the opinion substantially outweighs their prejudicial effect.'” Dkt. # 330 at 14 (citing Fed.R.Evid. 703). But Kindler does not rely on Oliver for the “facts or data” he provides, but rather for his valuation opinions. And there is no indication that an expert in the field would wholesale adopt an employee's valuation opinions as their own without questioning or even fully understanding the basis for those opinions. Kindler does not incorporate Oliver's technical conclusions as an input to her economic valuation opinions; she adopts them in their entirety as her own economic apportionment conclusion. Kindler cannot smuggle Oliver's purportedly lay opinions (again, Oliver has not been identified as an expert) into evidence under the guise of Rule 703 by reciting Oliver's apportionment opinions.
Third, Impinj argues that Kindler could rely on Oliver even if he is a lay, rather than expert, witness. Dkt. # 330 at 14. But however Oliver's testimony is labeled, there must be some discernable method-provided in Kindler's report, in a document drafted by Oliver, or elsewhere-that explains the methodology used to reach the apportionment figures. And because Oliver was not designated as an expert, he produced no report in which he could provide such an explanation.
Ultimately, assessing the relative value of the patented features in a product is one of the core functions of a damages expert when calculating a reasonable royalty. This assessment necessarily undergirds much of the overall reasonable-royalty analysis. Before outsourcing that role to a third-party (and a non-expert, at that), the damages expert must provide some explanation of the methodology used. See NetFuel, 2020 WL 1274985, at *7 (barring expert's calculations that represent a “black box” without a “sound economic and factual predicate[]” or “discernable methodology”).
Accordingly, the Court excludes Kindler's quantitative apportionment opinions (and any other quantitative opinions that directly rely on those apportionment opinions). But the Court does not exclude Kindler's qualitative opinions. Kindler may argue, for example, that Haas's royalty rate is too high (even far too high), that Haas's equal apportionment of value among features is improper, or that Haas otherwise failed to account for the value of features like True3D. Kindler may use words that carry some relative quantitative meaning (e.g., she may opine that Haas “vastly overstates” the value of TagFocus, that the value of TagFocus to customers is extraordinarily “small,” that True3D is the “primary driver” of demand, etc.). And she may generally explain how Haas's overvaluation of TagFocus leads to an inflated royalty rate. The Court precludes Kindler only from offering a precise, numerical apportionment percentage for the TagFocus feature (e.g., 3-4% or 3-5%).
C. Kindler's Purported Lack of “Starting Point” for Reasonable Royalty Analysis
NXP says that Kindler applies each of the Georgia-Pacific factors to adjust upward or downward her reasonable-royalty calculations. See Georgia-Pacific v. U.S. Plywood Corp., 318 F.Supp. 1116, 1120 (S.D.N.Y. 1970); Dkt. # 279 at 16-17. But, NXP says, her analysis lacks a reliable starting point from which she could make those adjustments. See, e.g., Dkt. # 279 at 17 (“It is unhelpful to the jury to opine that certain facts would exert an upward pressure on the rate without specifying upward from what.”).
The Court disagrees in part. Throughout her report, Kindler often relies on the royalty rates presented by NXP's expert, Haas, as a starting point. See, e.g., Dkt. # 287-11 at 34. She then adjusts Mr. Haas's rates using the Georgia-Pacific factors. Id. (downwardly adjusting Haas's estimate of the value of the accused features). This makes sense: Her report is a rebuttal report. The Court finds nothing improper about using the royalty rates provided by the opposing party as a starting point of analysis. Cf. Dkt. # 287-2 at 95 (Haas borrowing Kindler's estimates as a proxy).
But as stated above, Kindler may not provide quantitative apportionment opinions. Accordingly, when discussing any Georgia-Pacific factors relating to apportionment, Kindler may not “adjust” Haas's apportionment calculations with her own quantitative calculations (or rather, Oliver's). She may still qualitatively explain why she disagrees with Haas's assessment of any given Georgia-Pacific factor.
III
NXP's Motion in Limine
NXP moves to preclude any evidence or argument from “Mr. Oliver or Ms. Kindler” “regarding Mr. Oliver's opinions on issues of valuation and apportionment.” Dkt. # 458 at 5-8.
The Court reserved ruling on this motion in limine pending the Daubert hearing. Dkt. # 516 at 2.
The Court agrees in part and disagrees in part. For reasons similar to those outlined above, Oliver may not present his quantitative valuation opinions. As a lay witness, Oliver may not offer opinions that are based on “scientific, technical, or other specialized knowledge within the scope of Rule 702.” Fed.R.Evid. 701. Oliver's precise quantification of the relative value of various features developed specifically for purposes of litigation is at least arguably within the realm of expert testimony under Rule 702, at least when such opinions were not developed in the course of the employee's ordinary job duties. Even if not expert testimony, as described above, Oliver's statements indicate that he cannot reliably quantify the relative values of various features. So regardless of how his testimony is characterized, he is not qualified to offer precise, quantitative apportionment figures.
But Oliver may testify about facts that he knows. He can testify (consistent with other rules of evidence) that based on his role at Impinj, certain features are more important than others. For example, he can testify about his observation that in “internal marketing presentations,” TagFocus is “further down in that list [of features].” Dkt. # 519-1 at 28. He can also testify as to any facts or opinions developed as a result of his role at Impinj (e.g., based on conversations with customers).
So the Court grants the motion in limine in part and denies it in part.
IV
Conclusion
For the reasons stated, the Court:
(1) GRANTS NXP's motion to exclude certain opinions of Lauren R. Kindler in part, DENIES it in part, and STRIKES it in part as moot. Dkt. # 279.
(2) GRANTS in part and DENIES in part NXP's second motion in limine. Dkt. # 458.