Trials@uspto.gov                                          Paper No. 33
571-272-7822                                Entered: March 23, 2016

UNITED STATES PATENT AND TRADEMARK OFFICE
____________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________

MICROSOFT CORPORATION,
Petitioner,

v.

IPLEARN-FOCUS, LLC,
Patent Owner.
____________

Case IPR2015-00097
Patent 8,538,321 B2
____________

Before MICHAEL W. KIM, RICHARD E. RICE, and JEREMY M. PLENZLER, Administrative Patent Judges.

RICE, Administrative Patent Judge.

FINAL WRITTEN DECISION
35 U.S.C. § 318(a) and 37 C.F.R. § 42.73

I. INTRODUCTION

A. Summary

Microsoft Corporation ("Petitioner") filed a Petition (Paper 2, "Pet.") requesting an inter partes review of claims 1–5, 11–13, 24–28, 34–37, 39, 44, and 52 of U.S. Patent No. 8,538,321 B2 (Ex. 1001, "the '321 Patent"). We instituted a trial as to only claims 13, 34–37, 39, and 44. Paper 11 ("Inst. Dec."), 23. After institution, IpLearn-Focus, LLC ("Patent Owner") filed a Patent Owner Response (Paper 20, "PO Resp."), to which Petitioner filed a Reply (Paper 23, "Pet. Reply").

The parties relied at trial on the following references, declarations, and deposition testimony:

  Krueger: US 4,843,568, June 27, 1989 (Ex. 1008)
  Hutchinson: Thomas E. Hutchinson et al., Human–Computer Interaction Using Eye-Gaze Input, 19:6 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS 1527 (Nov./Dec. 1989) (Ex. 1004)
  Suenaga: Yasuhito Suenaga et al., Human Reader: An Advanced Man-Machine Interface Based on Human Images and Speech, 24:2 SYSTEMS AND COMPUTERS IN JAPAN 88 (1993) (Ex. 1007)
  Declaration of David Forsyth (Ex. 1003)
  Declaration of David Crane (Ex. 2007)
  Transcript of Deposition of David Crane (Ex. 1030)
  Transcript of Deposition of David Forsyth (Ex. 2008)

The grounds for trial were as follows:

  References                Basis       Claims Challenged
  Hutchinson                § 102(b)    13, 34, and 35
  Suenaga                   § 102(b)    34–37
  Hutchinson and Krueger    § 103(a)    39 and 44

An oral hearing was held on December 17, 2015. The transcript of the oral hearing has been entered into the record. Paper 32 ("Tr.").

We have jurisdiction under 35 U.S.C. § 6(c). The evidentiary standard is a preponderance of the evidence. See 35 U.S.C. § 316(e); 37 C.F.R. § 42.1(d). This Final Written Decision is issued pursuant to 35 U.S.C. § 318(a) and 37 C.F.R. § 42.73.

For the reasons explained below, we determine that Petitioner has shown by a preponderance of the evidence that claims 13, 34, 35, 39, and 44, but not claims 36 or 37, are unpatentable.

B. Related Proceedings

Petitioner and Patent Owner are parties in a federal district court case involving the '321 Patent (IpLearn-Focus, LLC v. Microsoft Corp., No. 3:14-cv-00151-JD (N.D. Cal.)). Pet. 2; Paper 5, 1. They also are parties in an inter partes review involving a patent (U.S. Patent No. 8,475,174 B2, "the '174 Patent") that is related to the '321 Patent. See IPR2015-00095, Paper 12; Ex. 1001, at (63).

C. The '321 Patent

The '321 Patent relates to a computer learning system including a detached sensor to monitor a student's behavior. Ex. 1001, 1:20–22, 1:42–2:7. In one embodiment, the system includes a presenter, a non-intrusive sensor, a controller, and an indicator. Id. at 1:64–65.
In this embodiment, the presenter presents study materials to a student, the sensor automatically senses the student's concentration-sensitive behavior, the controller analyzes the behavior based on one or more rules, and the indicator indicates the student's concentration level based on the analysis. Id. at 1:64–2:6. In another embodiment, the system reacts according to the indication. Id. at 2:6–7.

The Specification describes calibrating a student's "concentration-sensitive behavior." Id. at 2:45–47. "One type of calibration establishes the student's behavior when the student is paying attention, and compares it with the student's behavior when the student is working on the study materials." Id. at 2:48–51.

In an embodiment, sensor 110 includes digital camera 180, which is positioned adjacent to monitor 178 such that it can take pictures of the student's face while looking at the monitor. Id. at 8:38–46. "To improve the performance of this embodiment, before the step of monitoring, the present invention includes the step of calibration through imaging." Id. at 8:50–52. The Specification describes, as an example, asking the student to look at a message box displayed on the monitor so that the digital camera can take a reference image of the student's face. Id. at 8:55–60. After calibration, digital camera 180 regularly captures the student's facial image. Id. at 9:14–16. According to the Specification, a rule for use with this embodiment is: "If the student's facial orientation is significantly different from its reference image as shown in two consecutive monitoring processes, the student has lost concentration in the study materials." Id. at 9:30–33.

In another embodiment, study materials are presented in a "multi-windows environment," with the student entering inputs into the system using, for example, a mouse or a keyboard. Id. at 8:11–14. A rule for use with this embodiment is that: "If for a predetermined period of time, the inputs have been entered outside the window where the study materials reside, the student has lost concentration in the study materials." Id. at 8:23–25.
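For illustration only, the two rules just quoted can be read as a short sketch. The '321 Patent discloses rules, not source code, so every function, parameter, and threshold name below is hypothetical:

    # Illustrative sketch only; the '321 Patent discloses rules, not code.
    # capture_face_image, orientation_difference, the threshold, and the
    # event objects are hypothetical stand-ins.

    def lost_concentration_by_face(reference_image, capture_face_image,
                                   orientation_difference, threshold):
        """Rule at Ex. 1001, 9:30-33: concentration is lost if the facial
        orientation differs significantly from the reference image in two
        consecutive monitoring processes."""
        consecutive = 0
        while consecutive < 2:
            image = capture_face_image()   # regular capture (id. at 9:14-16)
            if orientation_difference(image, reference_image) > threshold:
                consecutive += 1           # another "significantly different" image
            else:
                consecutive = 0            # reset on a matching image
        return True                        # student has lost concentration

    def lost_concentration_by_input(events, study_window):
        """Rule at Ex. 1001, 8:23-25: concentration is lost if, for a
        predetermined period, every input fell outside the window where
        the study materials reside. `events` holds the input events
        observed during that period."""
        return len(events) > 0 and all(e.window != study_window for e in events)

Either rule, on returning True, would drive the indicator and any reaction described at Ex. 1001, 2:6–7.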
D. Illustrative Claims

Claims 13 and 34 are independent. Dependent claims 35–37, 39, and 44 depend from claim 34. Claims 13 and 34 are illustrative and are reproduced below:

  13. A computing device comprising:
    a display to show content at least during some of the time the device is used by a user;
    an imaging sensor to sense a first feature of the user regarding a first volitional behavior of the user to produce a first set of measurements, the imaging sensor detached from the first feature to sense the first feature, the first feature being related to an attribute of the head of the user, and the first set of measurements including an image of the first feature; and
    a processor coupled to the imaging sensor and the display, the processor to:
      analyze at least the first set of measurements;
      determine whether to change what is to be presented by the display in view of the analysis;
      change what is to be presented by the display at least in view of the determination; and
      resume what is to be presented by the display at least in view of a second set of measurements from sensing the user by the device.

  34. A computing device comprising:
    a display;
    an imaging sensor, connected and positioned adjacent to, the display to sense a first feature of a user regarding a first volitional behavior of the user to produce a first set of measurements, the imaging sensor detached from the first feature to sense the first feature, the first feature being related to an attribute of the head of the user, and the first set of measurements including a plurality of images of the first feature; and
    a processor coupled to the imaging sensor and the display, the processor to analyze, relative to the display, at least the first set of measurements to identify whether the user is not paying attention to content presented by the display.

Id. at 14:39–58, 15:61–16:8.

II. ANALYSIS

A. Claim Construction; Level of Skill in the Art

In an inter partes review, the Board gives claim terms in an unexpired patent their broadest reasonable interpretation in light of the specification of the patent in which they appear. 37 C.F.R. § 42.100(b); see also In re Cuozzo Speed Techs., LLC, 793 F.3d 1268, 1278, 1279 (Fed. Cir. 2015) ("We conclude that Congress implicitly approved the broadest reasonable interpretation standard in enacting the AIA" and "the standard was properly adopted by PTO regulation."), cert. granted sub nom. Cuozzo Speed Techs. LLC v. Lee, 136 S. Ct. 890 (mem.) (2016). Under that standard, a claim term generally is given its ordinary and customary meaning, as would be understood by one of ordinary skill in the art in the context of the entire disclosure. In re Translogic Tech., Inc., 504 F.3d 1249, 1257 (Fed. Cir. 2007). While our claim interpretation cannot be divorced from the specification and the record evidence, see Microsoft Corp. v. Proxyconn, Inc., 789 F.3d 1292, 1298 (Fed. Cir. 2015) (quoting In re NTP, Inc., 654 F.3d 1279, 1288 (Fed. Cir. 2011)), we must be careful not to import limitations from the specification that are not part of the claim language. See SuperGuide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870, 875 (Fed. Cir. 2004). Any special definition for a claim term must be set forth in the specification with reasonable clarity, deliberateness, and precision. In re Paulsen, 30 F.3d 1475, 1480 (Fed. Cir. 1994).

The parties propose different definitions of a person of ordinary skill in the art ("POSITA"). According to Petitioner's expert, Dr. Forsyth, a POSITA

  would have had a combination of experience and education in computer vision and the design of human-computer interfaces. This typically would consist of a minimum of a bachelor of science in Computer Science or Electrical Engineering (or a related engineering field) plus either a year of graduate training or 2-4 years of relevant experience. The POSITA also would have been familiar with the design of, theory behind, principles of operation of, intended use of, and the underlying technology used in computer vision and human-computer interfaces.

Ex. 1003 ¶ 27. In contrast, Patent Owner's expert, Mr. Crane, testifies that a POSITA "would have had a bachelor's degree or equivalent in the field of computer engineering, computer science or electrical engineering and at least two to three years of experience relating to human-machine interface design." Ex. 2007 ¶ 15.

Based on the competing arguments and evidence of record, we largely accept Patent Owner's proposed definition.
The definition proposed by Petitioner appears to be overly narrow because of its focus on "computer vision." We have made minor modifications to Patent Owner's definition to clarify the educational requirement, and to broaden the experience requirement to include at least a year of graduate training as an alternative to work experience, consistent with the background of researchers in the field as indicated by prior art references such as Hutchinson. See Ex. 1004, 1533–1534. We determine that a POSITA would have had at least a bachelor's degree in computer engineering, computer science, electrical engineering, or an equivalent field, and at least a year of graduate training, or two years of work experience, relating to human-machine interfaces.

1. "content presented by the display"

Claim 34 recites "content presented by the display" (emphasis added). Patent Owner argues that, consistent with the Specification and in ordinary usage, the term "content" does not encompass "the name of a key on the keyboard, or the name of a menu item, or the name of a functional key on the menu, such as 'next page,'" even when those items are presented by a display. PO Resp. 27. Patent Owner, however, does not propose an express construction for the term "content." In the Reply, Petitioner argues that the ordinary meaning of "content" is information, such as text, video, and sound. Pet. Reply 4 (citing Ex. 1033[1]).

We find that the ordinary meanings of "content" are "[s]omething contained, as in a receptacle: the contents of my desk drawer; the contents of an aerosol can,"[2] and "[i]nformation, such as text, video, and sound, usually as contrasted with its format of presentation: a television producer looking for content that was more entertaining."[3] Although the Specification does not define or use the term "content," it sheds light on the meaning of the term, by describing a calibration technique utilizing the text and graphics (the content) of a message box to direct the user's attention and to take a reference image. See Ex. 1001, 8:55–60 (describing a preferred embodiment in which a reference image of the student's face is taken when the student is looking at a message box containing the message "LOOK AT ME" and a picture of two eyes staring at the student; "the digital camera 180 takes a reference image of the student's face, who typically looks at the two eyes").

  [1] Exhibit 1033 does not contain Petitioner's asserted definition, although, as referenced below, we found essentially the same definition on-line.
  [2] Ex. 1033.
  [3] http://search.credoreference.com/content/entry/hmdictenglang/content_1/0?searchId=4da7a984-e49d-11e5-9bca-0e58d2201a4d&result=5 (last visited March 8, 2016).

Accordingly, we determine that the broadest reasonable construction consistent with the Specification of "content presented by the display" is "something contained within a portion of the display," including, but not limited to, what is advocated for by Petitioner, i.e., "information, such as text, video, and sound."

2. "not paying attention to content presented by the display"

Claim 34 recites "a processor . . . to identify whether the user is not paying attention to content presented by the display" (emphasis added). In the Institution Decision, we interpreted the italicized negative limitation "not paying attention to content presented by the display." Inst. Dec. 10–11.
Although neither party has objected to our preliminary interpretation (see PO Resp. 6–7; Pet. Reply 2), we repeat our analysis below in view of its importance to this Final Written Decision.

In the Petition, Petitioner asserts an implicit construction of this limitation in its arguments relating to Hutchinson. See Pet. 14–16. Hutchinson discloses a computer system that determines whether a user's attention is directed to a particular menu box in order to select that box from multiple menu boxes presented by a display. E.g., Ex. 1004, 1530 ("Menu options appear in from one to nine of the menu boxes. The user makes a selection by staring at the desired option for a short period of time."). In a text reading application, the text is displayed on the screen along with a bottom row comprising three menu boxes that are used to turn pages and to call up a submenu. Id. at 1531.

In light of the above, Petitioner argues implicitly that the negative limitation "not paying attention to content presented by the display" means not paying attention to some of the content presented by the display, such as the content presented in one or more discrete boxes or windows. Pet. 15–16 ("When a user's attention is directed to a menu box, it is an indicator that he or she is not paying attention to other boxes or the window presenting text materials") (citing Ex. 1003 ¶ 71). The Specification supports this interpretation. It includes an example where study materials are presented in a multi-windows environment, and teaches: "If for a predetermined period of time, the inputs have been entered outside the window where the study materials reside, the student has lost concentration in the study materials." Ex. 1001, 8:23–25.

We have considered the possibility that the negative limitation "not paying attention to content presented by the display" should be interpreted to mean not paying attention to all of the content presented by the display. As discussed above, the Specification discloses that a student's loss of concentration can be measured by determining whether the student is no longer looking at a monitor, i.e., whether the student is not paying attention to all of the content displayed by the monitor. Id. at 8:38–9:33. We have determined, however, that this possible interpretation is not the broadest reasonable interpretation consistent with the Specification, because a student who is not paying attention to all of the content, necessarily, also is not paying attention to some of the content.

We determine that the broadest reasonable interpretation consistent with the Specification of the negative limitation "not paying attention to content presented by the display" is not paying attention to some of the content presented by the display, such as the content presented in one or more discrete boxes or windows.

3. "a processor . . . to identify whether the user is not paying attention to content presented by the display"

Neither party proposes an express construction for "a processor . . . to identify whether the user is not paying attention to content presented by the display," recited in claim 34. In the Patent Owner Response, Patent Owner, however, asserts an implicit construction in its arguments relating to Suenaga. PO Resp. 35–38.[4]

  [4] Similar to Hutchinson, Suenaga discloses a computer system that determines whether a user's attention is directed to a particular menu box in order to select that box from multiple menu boxes presented by a display. Ex. 1007, 98–99, Fig. 7.
Specifically, Patent Owner argues that "even if Suenaga's selection of a menu item did disclose identifying paying attention in certain instances, 'identify[ing] . . . not paying attention' requires more than identifying paying attention or identifying information that a person might interpret to indicate not paying attention." PO Resp. 38. Patent Owner further argues that the phrase requires the system, itself, to determine whether the user is not paying attention:

  [I]f "identify[ing] … not paying attention" is interpreted such that all that is needed is gathering information that a person might interpret to indicate not paying attention, without the step of actually identifying not paying attention, then the word "identify" in the claim is read out of the claim. Rather, "identify[ing] … not paying attention" requires an actual identification by the Suenaga system of not paying attention, which, as discussed above, is not disclosed or performed by Suenaga.

Id. (emphasis added).

In the Reply, Petitioner does not acknowledge or address Patent Owner's argument that "identify[ing] . . . not paying attention" requires the system, itself, to determine whether the user is not paying attention. Compare Pet. Reply 8, with PO Resp. 38. Indeed, Petitioner's reply arguments largely ignore the "identify" or "identify[ing]" requirement. For example, Petitioner argues:

  If a user is paying attention to some text content on the screen to make a selection, they necessarily are not paying attention to other content. This is an indicator of a user's attention. . . . That is all the claim requires.

Pet. Reply 8 (citing Ex. 1003 ¶ 139 and Ex. 2008, 165:20–166:3).

Patent Owner's implicit claim construction is consistent with the Specification, which describes systems and devices that determine, themselves, whether the user is not paying attention to content presented by the display. For example, one embodiment determines whether the user is not paying attention (i.e., has lost concentration) by comparing images of a student's facial orientation to a reference image indicative of the user's facial orientation when paying attention (i.e., looking at the monitor). See Ex. 1001, 9:10–9:33.

We determine that the broadest reasonable interpretation consistent with the Specification of "a processor . . . to identify whether the user is not paying attention to content presented by the display" requires the processor, itself, to determine whether the user is not paying attention to content presented by the display.

4. "resume"

Claim 13 recites the phrase "resume what is to be presented by the display" (emphasis added). In the Institution Decision, we determined that the broadest reasonable interpretation consistent with the Specification of the term "resume" is "return to." Inst. Dec. 13. Neither of the parties proposes any change to that interpretation, and our review of the evidence does not indicate that any change is necessary. Consequently, we maintain our previous interpretation.

B. Asserted Anticipation of Claims 13, 34, and 35 by Hutchinson

To anticipate a patent claim under 35 U.S.C. § 102, "a single prior art reference must expressly or inherently disclose each claim limitation." Finisar Corp. v. DirecTV Group, Inc., 523 F.3d 1323, 1334 (Fed. Cir. 2008).
Under the principles of inherency, if the prior art necessarily functions in accordance with, or includes, the claimed limitations, it anticipates, even though artisans of ordinary skill may not have recognized the inherent characteristics or functioning of the prior art. MEHL/Biophile Int'l Corp. v. Milgraum, 192 F.3d 1362, 1365 (Fed. Cir. 1999) (citation omitted); In re Cruciferous Sprout Litig., 301 F.3d 1343, 1349–50 (Fed. Cir. 2002).

Petitioner challenges claims 13, 34, and 35 as anticipated by Hutchinson. Pet. 11–16, 30–37. As discussed below, we are persuaded that Hutchinson anticipates claims 13, 34, and 35.

1. Overview of Hutchinson

Hutchinson discloses a computer system that determines whether a user's attention is directed to a particular menu box in order to select that box from multiple menu boxes presented by a display. E.g., Ex. 1004, 1527–1530. Figure 3(b) of Hutchinson is reproduced below.

  [Figure 3(b) of Hutchinson: schematic of the eye-gaze-response interface computer aid; not reproduced in this text version.]

Figure 3(b) is a schematic that depicts a device called the eye-gaze-response interface computer aid ("Erica"), and shows how eye gaze operates. Id. at 1527, 1529. A principal goal of Erica is to help the physically and vocally disabled, including quadriplegics. Id. at 1527. Hutchinson discloses:

  To operate Erica, the user must maintain his or her head in a nearly stationary position. Lateral head movements greater than two inches in either direction cause the eye image to leave the camera field; movements greater than a few inches toward or away from the camera put the eye image out of focus. Sadly, this i[s] not a problem for many of the target population. Patients who suffer from cerebral palsy and similar disorders, however, have uncontrolled head movements that currently inhibit their use of the system. Hardware alternatives that will make Erica available to this population, such as head-tracking systems and autofocus lenses, are under evaluation.

Id. at 1532.

Hutchinson discloses that staring at one of the commands, or menu options, displayed on the computer screen for a period of time triggers the system. Id. at 1529, 1530. Hutchinson further discloses that "[w]hen the user's eye-gaze is fixed for this period, a tone sounds and an icon (cursor) appears in the menu box in line with the gaze." Id. at 1530. If the user continues to stare at the command or menu option after the tone sounds and the icon appears, a second tone sounds and the selected command or option is performed. Id. Hutchinson discloses that "[t]he purpose of the auditory and visual feedback is to allow the user a moment to change or abort the enabled option by altering his or her gaze accordingly." Id. (emphasis added). Hutchinson also discloses that "[m]ost menus contain a 'back-up' option, which permits the user to return to the previous menu if desired." Id.
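For illustration, the dwell-and-confirm sequence just described (initial fixation, first tone and cursor, continued fixation, second tone and execution, with a moment to abort by altering the gaze) can be summarized in a short sketch. This is not Erica's code; the gaze_box() helper and the dwell durations are assumptions:

    import time

    # Illustrative sketch of the dwell-time selection and abort behavior
    # Hutchinson describes (Ex. 1004, 1530). Helper names and timing
    # values are hypothetical.

    def select_with_abort(gaze_box, on_tone, on_cursor, on_execute,
                          enable_dwell=2.0, confirm_dwell=2.0, tick=0.05):
        """gaze_box() returns the menu box in line with the user's gaze,
        or None when the gaze is on no box."""
        target, held = None, 0.0
        while target is None or held < enable_dwell:
            box = gaze_box()
            if box is not None and box == target:
                held += tick                 # fixation continues
            else:
                target, held = box, 0.0      # gaze moved; start over
            time.sleep(tick)
        on_tone()                            # first tone sounds
        on_cursor(target)                    # icon appears in the gazed-at box
        held = 0.0
        while held < confirm_dwell:          # a moment to change or abort
            if gaze_box() != target:
                return None                  # gaze altered: option aborted
            held += tick
            time.sleep(tick)
        on_tone()                            # second tone
        on_execute(target)                   # selected option is performed
        return target

The abort branch is the step in which the processor itself registers that the gaze has left the enabled box, which is the behavior the Decision relies on below for claims 34 and 35.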
2. Analysis―Claims 13, 34, and 35

Upon review of the competing arguments and evidence presented by the parties, we determine that Petitioner has shown by a preponderance of the evidence that Hutchinson anticipates claims 13, 34, and 35. We are persuaded that Petitioner―through argument, claim charts, and the testimony of Dr. Forsyth―has shown sufficiently that Hutchinson discloses each limitation of claims 13, 34, and 35, for the reasons set forth below. See Pet. 11–16, 30–37; Ex. 1003 ¶¶ 41–72.

For example, independent claims 13 and 34 recite similar "display" and "imaging sensor" limitations. We agree that Hutchinson's computer monitor meets the "display" requirements of the independent claims. See Pet. 30–36; Ex. 1003 ¶¶ 47, 54, 69; Ex. 1004, 1527, Fig. 3(b). We also agree that Hutchinson discloses "an imaging sensor" (detached infrared camera) to sense "a first feature of a user" (the direction of the user's eye-gaze) to produce "a first set of measurements" (the relative positions of the "glint"[5] and "bright-eye"[6] in the camera images), as required by the independent claims. See Pet. 30–33; Ex. 1003 ¶¶ 48, 56; Ex. 1004, 1527, 1528, 1530, Figs. 3(b), 5.

  [5] "A fraction of the infrared light is reflected off the corneal surface. This is the first Purkinje image of the LED and appears in the camera as a small intense area of infrared light, called the glint." Ex. 1004, 1528.
  [6] "A fraction of the infrared light enters the pupil and is reflected off the retina. This is the image of the pupil, called the bright eye (a reflection of infrared light from the human retina, similar to the reflection of visible light from a cat's eye at night). The bright-eye appears in the camera as an area of infrared light, larger and less intense than the glint, but more intense than the 'dark' image of the surrounding iris." Id.
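Footnotes [5] and [6] describe the two image features Erica measures. As an illustration of how such relative positions can be turned into a point of gaze, the sketch below fits a simple least-squares calibration from glint-to-pupil offsets to screen coordinates. Hutchinson publishes no code; the affine model and calibration procedure are assumptions:

    import numpy as np

    # Illustrative sketch: estimating gaze from the relative positions of
    # the glint [5] and the bright-eye [6]. The affine calibration model
    # is an assumption, not Hutchinson's disclosed method.

    def gaze_offset(glint_xy, bright_eye_xy):
        """Vector from the corneal glint to the bright-eye (pupil) center,
        in camera pixels; it varies with eye rotation while the glint
        stays roughly fixed for a stationary head."""
        return np.asarray(bright_eye_xy, float) - np.asarray(glint_xy, float)

    def calibrate(offsets, screen_points):
        """Least-squares affine map from measured offsets to known screen
        calibration points (e.g., the centers of the menu boxes)."""
        A = np.hstack([np.asarray(offsets, float),
                       np.ones((len(offsets), 1))])
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_points, float),
                                     rcond=None)
        return coeffs                        # 3 x 2 affine parameters

    def point_of_gaze(coeffs, glint_xy, bright_eye_xy):
        """Map one new glint/bright-eye measurement to screen coordinates."""
        off = gaze_offset(glint_xy, bright_eye_xy)
        return np.hstack([off, [1.0]]) @ coeffs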
a. Claim 13

With respect to the "processor" limitation of claim 13, we agree with Petitioner that Hutchinson discloses a processor that performs the "analyze," "determine," "change," and "resume" requirements, as recited. See Pet. 31–33; Ex. 1003 ¶¶ 49, 50, 57–61; Ex. 1004, 1527, 1530, 1531. Regarding the "resume" requirement, i.e., "resume what is to be presented by the display at least in view of a second set of measurements from sensing the user by the device," we are persuaded by Petitioner and Dr. Forsyth that Hutchinson discloses interfaces that allow the user to return to what was previously being presented by the display, as required by our interpretation of "resume." See Pet. 16 (citing Ex. 1003 ¶ 61); supra Section II.A.4. We credit Dr. Forsyth's testimony as follows:

  First, Hutchinson discloses an interface where the user is able to "back-up" to what was previously being shown. Hutchinson discloses that the system has auditory and visual feedback that allows a user a moment to change or abort an enabled menu command. Additionally, "[m]ost menus contain a 'backup' option, which permits the user to return to the previous menu if desired." ([Ex. 1004, 1530].) A POSITA would have understood that these are ways to resume what is being presented on the display in view of the sensed data on the user's attention.

  Second, as discussed above, Hutchinson discloses how the user's gaze is sensed and analyzed in order to adjust what is being presented on the display. On page 1531 of Hutchinson, in describing a text reading application, Hutchinson discloses "Two boxes are used to turn pages, backwards and forwards, and the third to call-up a submenu. Options of the submenu allow the user to place a bookmark on the current page, select an alternate text, and exit the application." Therefore, a reader can pause reading text materials then resume by selecting forwards. Similarly, a reader can exit the application, then resume reading by re-starting the application.

Ex. 1003 ¶ 61.

Patent Owner's opposing arguments are not persuasive. Patent Owner acknowledges that Hutchinson allows the user to select "next page" (based on a first set of measurements) and, subsequently, to select "previous page" (based on a second set of measurements) but contends that the "previous page" command does not meet the "resume" requirement because the user must select the command purposely and in a particular order relative to other commands. PO Resp. 29. Specifically, Patent Owner asserts that the success of commands returning to previous content, such as the "previous page" command, presumes that there is previous content to return to, and that such speculation is impermissible in making a finding of anticipation.

This argument is unpersuasive because it is not commensurate with the scope of the claim or what one of ordinary skill would understand from the prior art. Contrary to Patent Owner's arguments, the "resume" requirement encompasses returning to what was previously being presented by the display in view of a second set of measurements indicating, for example, that the direction of the user's eye gaze is in line with the "previous page" command in Hutchinson. We are persuaded that one of ordinary skill would understand that Hutchinson contemplates the existence of that previous content when executing the "previous page" command.

b. Claims 34 and 35

As to claims 34 and 35, we agree that Hutchinson discloses "a processor coupled to the imaging sensor and the display, the processor to analyze, relative to the display, at least the first set of measurements to identify whether the user is not paying attention to content presented by the display." Specifically, as disclosed in Hutchinson, after the user gazes for a period of time at a menu box on the display, a tone sounds and an icon (cursor) appears in the menu box in line with the gaze, but the option associated with the menu box is aborted if the user does not continue to gaze at the box for an additional period of time. Ex. 1004, 1530; see Pet. 36 (citing Ex. 1003 ¶ 71). Hutchinson's processor meets the claim requirement for "a processor . . . to identify whether the user is not paying attention to content presented by the display" because, to implement the abort mechanism, the processor determines, itself, whether the user is not directing his or her gaze at the icon contained in the menu box, in accordance with our claim interpretation. See Ex. 1004, 1530; supra Section II.A.3.

Patent Owner's opposing arguments with respect to claims 34 and 35 are not persuasive. PO Resp. 16–28. Patent Owner argues:

  Hutchinson does not "identify whether the user is not paying attention to content presented by the display" because (1) Hutchinson does not expressly or inherently identify whether the user is or is not paying attention to content presented by the display, and (2) a button that says "next" or "back" is not "content presented by the display," as claimed.

Id. at 17–18 (citing Ex. 2007 ¶¶ 35–42). Additionally, relying on asserted "admissions" of Dr. Forsyth on cross-examination, Patent Owner argues that Hutchinson is not programmed to "identify" whether the user is or is not paying attention. Id. at 18–21 (citing Ex. 2008, 53:15–24, 56:18–57:9, 58:3–22, 61:19–62:1, 66:15–21).
Patent Owner also argues:

  [T]he Petition and the Forsyth Declaration confirm that Hutchinson does not expressly identify "not paying attention to content presented by the display" because they do not assert that Hutchinson in fact identifies "not paying attention." Rather, both the Petition and its supporting declaration only assert that Hutchinson identifies "paying attention," and that "[w]hen a user's attention is directed to a menu box, it is an indicator that he or she is not paying attention to other boxes or the window presenting text materials." Pet. 15-16; see also Pet. 36 ("Hutchinson discloses identifying whether the user is paying attention to content"); Ex. 1003 at ¶ 71 ("When a user's eye gaze is on a control menu box, then it is an indicator that the user is not paying attention to the window presenting the text materials.")[.]

Id. at 21–22. According to Patent Owner, "a user's eye gaze is not necessarily a proxy for attention and does not necessarily identify whether the user is not paying attention to content." Id. at 22 (citing Ex. 2007 ¶¶ 36–37). Patent Owner further argues:

  This is because in Hutchinson, the user uses his or her eye gaze to control the computer. The user may type, makes selections, and/or turns the pages of a book, all solely through the movement of his or her eyes. Thus the user of Hutchinson may need to look away from content the user is actually paying attention to in order to perform a command. In this context, the user's eye gaze has nothing to do with what he or she is or is not paying attention to.

Id. at 22–23.

The above arguments of Patent Owner do not address Hutchinson's abort mechanism. Contrary to Patent Owner's arguments, we find that the abort mechanism is programmed to identify whether the user is not paying attention to content presented by the display, for the reasons discussed above. Accordingly, we do not agree with Patent Owner that "the user's eye gaze has nothing to do with what he or she is or is not paying attention to." See id.

Patent Owner also argues that "[c]hoosing to make a selection or choosing to abort a selection does not identify whether the user is not paying attention to the selected item or to the content presented by the display." Id. at 25 (citing Ex. 2007 ¶ 25). Patent Owner argues that "[a]borting, for example, a 'next page' command by directing the user's eye gaze away from the box does not indicate the user is not paying attention to that command box or to the box containing the text of the book." Id. at 25–26. These arguments also are unpersuasive. Hutchinson's processor, when implementing the abort mechanism, determines, itself, whether the user is not directing his or her gaze at the icon contained in the menu box. Not directing his or her gaze at the icon contained in the menu box corresponds to "not paying attention to content presented by the display," as recited in the claim. Whether the user of Hutchinson's device is thinking about the icon when not directing his or her gaze at the icon is irrelevant, because eye gaze direction is the sole feature used by Hutchinson's processor to determine whether the user is not paying attention to the icon. Indeed, Patent Owner's contention would appear to confer mind-reading abilities on a computer that are completely divorced from physical stimuli. We are unpersuaded that such a position is logical.
Finally, Patent Owner argues that the name of a menu box is not "content presented by the display." This argument is unpersuasive because it relies on an erroneous claim construction. The icon contained in the menu box of Hutchinson's display corresponds to "content presented by the display" under a proper claim construction. See supra Section II.A.1.

c. Conclusion

For the reasons given, we conclude that Petitioner has shown by a preponderance of the evidence that Hutchinson anticipates claims 13, 34, and 35.

C. Asserted Anticipation of Claims 34–37 by Suenaga

Petitioner challenges claims 34–37 as anticipated by Suenaga. Pet. 22–24, 26–27, 56–58. As discussed below, we are not persuaded that Suenaga anticipates claims 34–37.

1. Overview of Suenaga

Petitioner directs our attention to Suenaga's Head Reader locator or selector device. See Pet. 56 (citing Ex. 1007, 94). According to Suenaga, "head movement as a medium of nonverbal communication involves a locator function for focus or interest, as well as a button selector-type function for the sending of simple symbolic messages such as yes or no." Ex. 1007, 93. Suenaga discloses detecting "up," "down," "left," "right," "yes," and "no" selector messages. Id. In an experiment, "[t]he screen of a 21-inch monitor . . . was divided into 2 x 2 windows; and when a cursor display was added as a feedback system, it was possible to use a 6 x 4 window display." Id. at 94.
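As an illustration of the locator and selector functions just quoted, the sketch below maps a detected head orientation to one of the 2 x 2 experiment windows and maps nods and shakes to "yes"/"no" messages. Suenaga describes behavior, not code; the head-pose angles, the angular span, and the nod/shake detectors are assumptions:

    # Illustrative sketch of Suenaga's locator and selector functions
    # (Ex. 1007, 93-94). The yaw/pitch model, the 30-degree span, and the
    # nod/shake detector inputs are hypothetical assumptions.

    def window_from_head_pose(yaw_deg, pitch_deg, cols=2, rows=2,
                              span_deg=30.0):
        """Locator function: map head orientation to one of cols x rows
        screen windows (the experiment used a 2 x 2 display)."""
        col = int((yaw_deg / span_deg + 0.5) * cols)
        row = int((-pitch_deg / span_deg + 0.5) * rows)
        return (min(rows - 1, max(0, row)), min(cols - 1, max(0, col)))

    def selector_message(nod_detected, shake_detected):
        """Selector function: simple symbolic messages such as yes or no."""
        if nod_detected:
            return "yes"
        if shake_detected:
            return "no"
        return None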
2. Analysis―Claims 34–37

Claim 34, from which claims 35–37 depend, requires "a processor . . . to identify whether the user is not paying attention to content presented by the display." In the Petition, Petitioner asserts that Suenaga "discloses determining a user's 'focus or interest,' and can determine the direction a user's head is facing"; Petitioner argues that "[t]hese are indicators of a user's attention." Pet. 26–27; see id. at 57 (citing Ex. 1007, 93 and Ex. 1003 ¶ 139). In the Reply, Petitioner argues that if a user is paying attention to some text content on the screen to make a selection, the user necessarily is not paying attention to other content on the screen:

  If a user is paying attention to some text content on the screen to make a selection, they necessarily are not paying attention to other content. This is an indicator of a user's attention. . . . That is all the claim requires.

Pet. Reply 8 (citing Ex. 1003 ¶ 139 and Ex. 2008, 165:20–166:3).

Petitioner's arguments do not persuade us that Suenaga's processor, itself, determines whether the user is not paying attention to content presented by the display, as required under our claim interpretation. See supra Section II.A.3. We agree with Patent Owner that "'identify[ing] . . . not paying attention' requires an actual identification by the Suenaga system of not paying attention, which, . . . is not disclosed or performed by Suenaga." PO Resp. 38.

For the reasons given, we conclude that Petitioner has not shown by a preponderance of the evidence that Suenaga anticipates claims 34–37.

D. Asserted Obviousness of Claims 39 and 44 over Hutchinson and Krueger

A claim is unpatentable for obviousness under 35 U.S.C. § 103(a) if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which the subject matter pertains. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). A patent claim composed of several elements, however, is not proved obvious merely by demonstrating that each of its elements was known, independently, in the prior art. Id. at 418. In analyzing the obviousness of a combination of prior art elements, it can be important to identify a reason that would have prompted one of skill in the art to combine the elements in the way the claimed invention does. Id. A precise teaching directed to the specific subject matter of a challenged claim is not necessary to establish obviousness. Id. Rather, "any need or problem known in the field of endeavor at the time of invention and addressed by the patent can provide a reason for combining the elements in the manner claimed." Id. at 420. The question of obviousness is resolved on the basis of underlying factual determinations, including: (1) the scope and content of the prior art; (2) any differences between the claimed subject matter and the prior art; (3) the level of skill in the art; and (4) objective evidence of nonobviousness, i.e., secondary considerations, if in evidence. See Graham v. John Deere Co., 383 U.S. 1, 17–18 (1966).

Petitioner challenges claims 39 and 44 as obvious over Hutchinson and Krueger. As discussed below, we are persuaded that the combination of Hutchinson and Krueger renders obvious claims 39 and 44.

1. Overview of Krueger

Krueger discloses "VIDEOTOUCH," a coined term that means "(a) using a perceived [i]mage of the human body to control real time computer events, and further (b) expressing a response to the perceived image by any means which may be controlled by the computer." Ex. 1008, 1:7–12. According to Krueger, conventional approaches to tracking human movement suffered from two problems. Id. at 2:44–3:44. First, the approaches were intrusive, in that they relied on markers or signal sources that had to be worn or held by the person whose movements were being tracked. Id. at 3:30–33. Second, analysis of the user's motion did not take place in real time. Id. at 3:34–35. Krueger asserts that, in contrast with the conventional approaches, its VIDEOTOUCH system performs perceptual analysis of the user's image, identifying features of interest such as the head, hands, and fingers, in real time, and without using markers. Id. at 4:5–10.

Krueger discloses that "VIDEOTOUCH can be used in place of traditional input devices for a number of different applications, including menuing, parameter input, low speed typing, drawing, painting, and manipulation of graphic objects." Id. at 5:56–59. In applications for the disabled, "VIDEOTOUCH allows any movement of a person's body to be used as a control input to the computer." Id. at 7:23–25. "A severely disabled person might use an individually tailored VIDEOTOUCH system to control his environment via his attainable range of motions." Id. at 7:25–27.

In Krueger's VIDEOTOUCH system, a video camera is coupled to a computer processor that analyzes the camera images. Id. at 11:11–14, 27–30. The results of processing are displayed to the user via, for example, a desk top monitor. Id. at 11:14–20, 30–35, Figs. 1, 2. As explained by Dr. Forsyth, "[t]he system processes an image from the camera 30 times per second . . . , filtering the image to represent a silhouette of the user." Ex. 1003 ¶ 77 (citing Ex. 1008, 14:32–59, 15:7–23).
According to Dr. Forsyth, "[t]he system analyzes this image, identifying features of interest on the user such as the head, hands, fingers, without relying upon markers affixed to the body of the subject and in real time." Id. ¶ 78 (citing Ex. 1008, 4:5–10, 25:36–49). In an embodiment, a participant can draw a picture on a screen by moving a fingertip through the air in view of the video camera. Ex. 1008, 34:66–68.

Figure 11 of Krueger is reproduced below.

  [Figure 11 of Krueger: a silhouette image annotated with the information extracted by the outline processor; not reproduced in this text version.]

Figure 11 illustrates the type of information extracted from a silhouette image by Krueger's outline processor. Ex. 1008, 18:65–66. Krueger discloses recording the left-side and right-side outline points, and calculating the width, for each scan line. Id. at 19:1–9. Element 77 in Figure 11 represents the width for one particular scan line. Id. at 19:8–9.
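The outline-processor computation just described (left and right outline points and a width per scan line, element 77) reduces to a few lines. Krueger discloses the processing, not source code, so the boolean-silhouette input and the top-fraction head heuristic below are assumptions:

    import numpy as np

    # Illustrative sketch of per-scan-line width extraction as Krueger
    # describes it (Ex. 1008, 19:1-9). The silhouette input format and
    # the head heuristic are assumptions.

    def scan_line_widths(silhouette):
        """silhouette: 2-D boolean array, True where the user appears.
        Returns (scan line, left, right, width) per occupied scan line,
        cf. element 77 in Figure 11."""
        rows = []
        for y, line in enumerate(silhouette):
            xs = np.flatnonzero(line)
            if xs.size:
                left, right = int(xs[0]), int(xs[-1])
                rows.append((y, left, right, right - left + 1))
        return rows

    def head_width(silhouette, head_fraction=0.15):
        """Estimate head width as the widest scan line near the top of
        the silhouette; sensible when the arms are not raised (see
        Ex. 1003 par. 91)."""
        rows = scan_line_widths(silhouette)
        if not rows:
            return 0
        top = rows[: max(1, int(len(rows) * head_fraction))]
        return max(width for (_, _, _, width) in top)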
2. Analysis―Claim 39

Claim 39 recites "[a] computing device as recited in claim 34, wherein to analyze at least the first set of measurements includes to identify a width of the head." Petitioner contends that Krueger's system "can be used to calculate the width of the head," and that "[m]ultiple reasons would have prompted a POSITA to combine techniques from [Hutchinson and Krueger], for example, because they are directed to the same field of human-computer interfaces, and describe similar systems." Pet. 19 (citing Ex. 1003 ¶¶ 90–91). Petitioner further contends:

  Combining Krueger's teachings regarding a camera sensing multiple features of the user's body to control a display, with Hutchinson's system of a camera monitoring the user's eyes to control a computer, would have been natural for a POSITA because using features of human behavior to control input to a computer was well-known by a POSITA. See Ex. MS1001 at 10:22-24 . . . Krueger recognizes that an individual could benefit by tailoring control of the computer to the individual's range of motions, including head, hand, and finger behavior. Krueger at 7:13-28. Hutchinson realizes a similar benefit in operating a computer by using movement related to the head – eye movement. Hutchinson at 1533. In both references, the user's movements are captured in a similar manner by using an imaging sensor detached from the user. A POSITA would have known to use these established techniques (e.g., imaging various human features, see Krueger at 5:45-7:57) to improve human-computer interfaces.

Id. at 19–20 (citing Ex. 1003 ¶¶ 79–80). In addition, Petitioner contends that "a POSITA would have been able to combine [the references' techniques] in a functioning system because other contemporary publications combined similar techniques." Id. at 20 (citing Ex. 1003 ¶ 79).

In the Declaration filed with the Petition, Dr. Forsyth provides testimony in support of Petitioner's rationale for combining Krueger and Hutchinson. Ex. 1003 ¶ 79. For example, Dr. Forsyth testifies that "using techniques and components from different systems to build an improved system was common for a POSITA," as evidenced by "a contemporary publication [Epworth][7] [that] cited a paper by Krueger and his co-inventors, as well as papers about eye tracking as a computer input." Ex. 1003 ¶ 79. The Epworth publication noted by Dr. Forsyth in his Declaration discloses, for example, correcting point-of-gaze measurements using a head-movement sensor in order to obtain "a record of the absolute point-of-gaze versus time." Ex. 1026, 392 (emphasis added).

  [7] Richard Epworth, Eye Movements, for a Bidirectional Human Interface, ICL TECHNICAL JOURNAL 384–411 (1990) (Ex. 1026).

Patent Owner questioned Dr. Forsyth at length during his deposition regarding the statement in Hutchinson that "head-tracking systems . . . are under evaluation." Ex. 2008, 93:16–102:1, 108:21–109:7, 109:22–128:17; Ex. 1004, 1532. Petitioner relied on Dr. Forsyth's deposition testimony in the Reply, arguing that Hutchinson contemplates using a head-tracking system as taught by Krueger, in combination with Hutchinson's eye-gaze system, to track the user's eye gaze even when the user's head moves. See Pet. Reply 14 (citing Ex. 1004, 1532 and Ex. 2008, 110:20–112:5).

We agree with Petitioner and Dr. Forsyth that a POSITA would have been motivated to combine the teachings of Hutchinson and Krueger. As Petitioner argues, Hutchinson expressly contemplates using Erica with "hardware alternatives . . . such as head-tracking systems." Ex. 1004, 1532 (emphasis added); see Pet. Reply 13 (citing Ex. 1004, 1532 and Ex. 2008, 108:21–109:7). Krueger not only discloses head-tracking, but touts its VIDEOTOUCH system as an improvement that identifies human-body features such as the head in real time, and without use of intrusive markers. Ex. 1008, 2:44–3:44, 4:5–10. We credit Dr. Forsyth's deposition testimony explaining how Hutchinson's eye-gaze system would have worked in combination with a head-tracking system. Ex. 2008, 93:16–102:1, 108:21–109:7, 109:22–128:17. For example, we credit the following testimony:

  Q. Hutchinson cannot -- will have to recalibrate if the user moves their head more than two inches; correct?
  A. Hutchinson, as described, will have to recalibrate. Hutchinson also says head-tracking is under evaluation and this will alleviate that problem. Column 2, second paragraph.
  Q. But this is not -- what you're talking about, this possible new design to accommodate users who have cerebral palsy, that has not been developed as of the time of Hutchinson; correct?
  A. Hutchinson doesn't describe an implementation of it, but a reasonably accurate head-tracker was available. Hutchinson is '89. It would be fairly intrusive, but, yes, you would have a head-tracker at the time. This is not pure speculation on Hutchinson's part. . . .
  Q. And the function of a head-tracking system if it were used with Hutchinson would be to continue to track the user's eye gaze if their head moves; is that correct?
  A. Yes.

Ex. 2008, 94:22–95:13, 110:20–24.

We also credit the testimony of Dr. Forsyth that Krueger analyzes and identifies (i.e., tracks) a user's head in real time. Ex. 1003 ¶ 78 (citing Ex. 1008, 4:5–10, 25:36–49). Further, we credit Dr. Forsyth's testimony that "[a] POSITA would have known that Krueger's system was capable of measuring the width of the head, and would do so for situations similar to that of Figure 11 but when the arms were not raised." Ex. 1003 ¶ 91.

In opposition, Patent Owner asserts that Petitioner has failed to provide an adequate reason to combine Hutchinson and Krueger. See PO Resp. 54–57. We are not persuaded by Patent Owner's argument because it ignores the expressly identified suggestion in Hutchinson to use an eye-gaze system with a head-tracking system, such as disclosed in Krueger. See Ex. 1004, 1532; Ex. 1008, 2:44–3:44, 4:5–10; Pet. 38–40, 42; Pet. Reply 13–14.
Patent Owner also asserts that the combination does not teach "to analyze at least the first set of measurements includes to identify a width of the head," as recited by claim 39. PO Resp. 57–59. According to Patent Owner, "the features that the Petitioner proposes to implement in Hutchinson, such as the width of the head, have no use in Hutchinson's system." Id. at 54.

Patent Owner has not persuaded us that identifying the width of the user's head would have no use in the combination of Hutchinson and Krueger. Patent Owner's arguments improperly focus on individual teachings of the references, particularly Hutchinson, rather than the teachings of the combined references. See, e.g., PO Resp. 58 (arguing that "Hutchinson does not care about any features of the user other than the user's eyes, which Hutchinson directly identifies and calibrates through its near infrared camera"). We agree with Petitioner that "Hutchinson expressly contemplates using a 'head-tracking system' that would allow the system to continue to track the user's eye gaze even if the head moves, a place where Krueger's head identification system would be useful." Pet. Reply 14 (citing Ex. 1004, 1532; Ex. 2008, 110:20–112:5). Further, we agree with Petitioner that the combination of Hutchinson and Krueger teaches analyzing at least the first set of measurements to identify a width of the head, as required by claim 39. As explained by Dr. Forsyth, VIDEOTOUCH identifies the width of each scan line, and would identify the width of the head of the user when his or her arms are not raised. Ex. 1003 ¶ 91.

Finally, we are not persuaded by Patent Owner's argument that "Hutchinson does not work with the camera of Krueger." PO Resp. 55. Patent Owner provides no evidence and little technical reasoning to support that argument. We credit the contradicting testimony of Dr. Forsyth that the same type of near-infrared camera used in Hutchinson also could "function as a head-tracker." Ex. 2008, 128:10–17. Krueger, in fact, expressly discloses using infrared imaging as an alternative to imaging via a standard video camera that is sensitive to visible light. Ex. 1008, 8:1–5.

For the reasons given, we conclude that Petitioner has shown by a preponderance of the evidence that claim 39 would have been obvious over Hutchinson and Krueger.

3. Analysis―Claim 44

Claim 44 recites:

  44. A computing device as recited in claim 34, wherein the imaging sensor capable to sense another feature of the user regarding another volitional behavior of the user to produce another set of measurements, the another feature being different from the first feature and the another feature being related to an attribute of at least one eye of the user, and wherein the processor capable to determine, based at least on the first set of measurements and the another set of measurements, what is to be presented by the display.

The limitations of dependent claim 44, including the incorporated limitations from independent claim 34, can be divided into two parts―a first part relating to the sensor and a second part relating to the processor.

First, by its dependency from claim 34, the imaging sensor of claim 44 must "sense a first feature of a user regarding a first volitional behavior of the user to produce a first set of measurements, . . . the first feature being related to an attribute of the head of the user" (emphasis added).
Claim 44 additionally requires "the imaging sensor capable to sense another feature of the user regarding another volitional behavior of the user to produce another set of measurements, the another feature being different from the first feature and the another feature being related to an attribute of at least one eye of the user" (emphasis added).

Second, by its dependency from claim 34, the processor of claim 44 must analyze "at least the first set of measurements to identify whether the user is not paying attention to content presented by the display" (emphasis added). The term "at least" makes clear that the processor may analyze other data in addition to the first set of measurements. The processor of claim 44 must determine, additionally, "based at least on the first set of measurements and the another set of measurements, what is to be presented by the display."

Patent Owner argues that claim 34, as incorporated in claim 44, requires "a first feature that is used 'to identify whether the user is not paying attention to content presented by the display.'" PO Resp. 59. Patent Owner further argues that, even if Hutchinson discloses a "second feature (eye)," Petitioner has not alleged or shown that Krueger discloses a first feature that is used to identify whether the user is not paying attention to content presented by the display, as claim 34 requires. Id.

We disagree with Patent Owner's arguments for several reasons. As a threshold matter, Patent Owner's argument overlooks "at least" in the recitation "to analyze . . . at least the first set of measurements to identify whether the user is not paying attention to content presented by the display" (emphasis added). As discussed above, the term "at least" makes clear that the processor may analyze other data in addition to the first set of measurements. Of pertinence here, claim 44 encompasses a processor that analyzes both the first set of measurements (related to a first feature of the user) and the another set of measurements (related to another feature of the user) to identify whether the user is not paying attention to content presented by the display.

We incorporate herein our discussions of claims 34 and 39. See supra Sections II.B.2.b and II.D.2. As we discussed in connection with claim 39, Hutchinson expressly contemplates using a head-tracking system, such as taught by Krueger, in combination with Hutchinson's eye-gaze system in order to track the user's eye gaze even when the user's head moves. See Pet. Reply 14 (citing Ex. 1004, 1532 and Ex. 2008, 110:20–112:5). Accordingly, either in combination with Krueger or standing alone, we are persuaded that Hutchinson teaches using two sets of measurements (one related to the head and the other relating more particularly to the eye) that meet the requirements of claim 44. Specifically, we are persuaded that both sets of measurements would have been used in Hutchinson's system, as modified to include head-tracking, in order to analyze whether the user's eye gaze remains on the icon of Hutchinson's abort mechanism, and thereby to determine both whether the user is not paying attention to the icon (content presented by the display) and whether to abort the triggered menu option (what is to be presented by the display). See supra Section II.B.2.b; Ex. 1004, 1532. Accordingly, either the combination of Hutchinson and Krueger, or Hutchinson standing alone, renders obvious claim 44.
Patent Owner's argument challenging Petitioner's rationale for combining Hutchinson and Krueger is unpersuasive. See PO Resp. 59–60. We are not persuaded by Patent Owner's argument because it ignores the express suggestion in Hutchinson to use an eye-gaze system with a head-tracking system such as disclosed in Krueger. See Ex. 1004, 1532; Ex. 1008, 2:44–3:44, 4:5–10; Pet. 38–40, 42; Pet. Reply 13–14. Further, Hutchinson standing alone renders obvious claim 44, for the reasons set forth above.

Patent Owner's argument that "[c]laim 44 requires that the two features be sensed by the same imaging sensor" is unpersuasive because Patent Owner has not provided any claim construction or analysis to establish that the term "an imaging sensor" should be limited in that fashion. See PO Resp. 60; KCJ Corp. v. Kinetic Concepts, 223 F.3d 1351, 1356 (Fed. Cir. 2000) ("Unless the claim is specific as to the number of elements, the article 'a' receives a singular interpretation only in rare circumstances when the patentee evinces a clear intent to so limit the article."). Further, Patent Owner has not persuaded us that the same sensor could not be used for sensing eye-gaze direction and head movement. Petitioner argues persuasively to the contrary. See Pet. Reply 15. Dr. Forsyth's deposition testimony supports Petitioner's argument. See Ex. 2008, 128:10–17.

For the reasons given, we conclude that Petitioner has shown by a preponderance of the evidence that claim 44 would have been obvious over Hutchinson and Krueger.

III. CONCLUSION

For the foregoing reasons, we determine that Petitioner has shown by a preponderance of the evidence that claims 13, 34, 35, 39, and 44 are unpatentable, but has not shown by a preponderance of the evidence that claims 36 and 37 are unpatentable.

IV. ORDER

In view of the foregoing, it is hereby:

ORDERED that claims 13, 34, 35, 39, and 44 of U.S. Patent No. 8,538,321 B2 are unpatentable.

This is a Final Written Decision. Parties to the proceeding seeking judicial review of the decision must comply with the notice and service requirements of 37 C.F.R. § 90.2.

PETITIONER:
Jonathan Lamberson
David B. Conrad
John C. Phillips
FISH & RICHARDSON P.C.
lamberson@fr.com
conrad@fr.com
phillips@fr.com

PATENT OWNER:
Kenneth J. Weatherwax
Nathan Lowenstein
Parham Hendifar
GOLDBERG, LOWENSTEIN & WEATHERWAX LLP
weatherwax@lowensteinweatherwax.com
lowenstein@lowensteinweatherwax.com
hendifar@lowensteinweatherwax.com