APPLE INC., Application 13/743,989 (P.T.A.B. July 1, 2020)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 13/743,989
FILING DATE: 01/17/2013
FIRST NAMED INVENTOR: Gregory T. Lydon
ATTORNEY DOCKET NO.: 18962-0695001/P14669US1
CONFIRMATION NO.: 9683

147271 7590 07/01/2020
APPLE/JWMH
7501 Village Square Drive
Unit 206
Castle Pines, CO 80108

EXAMINER: BEJCEK II, ROBERT H
ART UNIT: 2123
NOTIFICATION DATE: 07/01/2020
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): Apple@jwmhlaw.com, eofficeaction@appcoll.com, howard.hamilton@jwmhlaw.com.

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte GREGORY T. LYDON and SYLVAIN RENÉ YVES LOUBOUTIN

Appeal 2019-000919
Application 13/743,989
Technology Center 2100

Before ALLEN R. MacDONALD, DEBRA K. STEPHENS, and JAMES B. ARPIN, Administrative Patent Judges.

STEPHENS, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant[1] appeals from the Examiner's decision to reject claims 1–11, 21–27, and 31–37 (see Final Act. 2–12). We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

[1] We use the word Appellant to refer to "applicant" as defined in 37 C.F.R. § 1.42. Appellant identifies the real party in interest as Apple Inc. (Appeal Br. 1).

CLAIMED SUBJECT MATTER

The claims are directed to generating notifications upon detecting unusual user behavior (Spec. ¶ 1). Claim 1, reproduced below, is illustrative of the claimed subject matter:
1. A method for determining behavior associated with a user device, comprising:
    receiving first sensor data, the first sensor data including readings from one or more sensors configured to measure motion of the user device and to measure an environment in which the motion is performed;
    creating behavior clusters from the first sensor data, each behavior cluster being associated with a respective behavior type, a respective set of one or more representative readings, and a respective magnitude threshold, creating the behavior clusters including:
        determining that a first set of sensor readings of the first sensor data correspond to a first behavior type at least in part by determining that differences between the first set of sensor readings and sensor readings of other behavior types exceed a quality threshold;
        responsive to determining that the first set of sensor readings of the first sensor data correspond to the first behavior type, creating a first behavior cluster from the first set of sensor readings;
        determining a first set of one or more representative readings based on magnitudes of the first set of sensor readings;
        determining a first magnitude threshold based on variances in the first set of sensor readings; and
        associating the first set of one or more representative readings and the first magnitude threshold with the created first behavior cluster;
    comparing second sensor data with the behavior clusters, the second sensor data including readings from the one or more sensors received after the first sensor data is received;
    determining that a behavior type corresponding to the second sensor data is different from each behavior type of the behavior clusters upon determining, based on results of the comparison, that the second sensor data is out of range of each magnitude threshold corresponding to each respective cluster from each respective set of one or more representative readings; and
    causing the user device to perform a security action in response to determining that the behavior type corresponding to the second sensor data is different.
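For illustration only, and not as part of the Board's decision or any party's filing, the cluster-creation steps recited in claim 1 might be pictured roughly as in the following sketch. The function name, the use of a mean magnitude as the "representative reading," and the two-standard-deviation threshold are assumptions about one possible realization, not language drawn from the claim or the cited references.

```python
import statistics

def create_behavior_cluster(first_set, other_type_readings, quality_threshold):
    """Hypothetical sketch of claim 1's cluster-creation steps.

    first_set           -- magnitudes of sensor readings proposed as one behavior type
    other_type_readings -- magnitudes already associated with other behavior types
    """
    # "determining that differences ... exceed a quality threshold":
    # approximated here as the gap between the two groups' mean magnitudes.
    difference = abs(statistics.mean(first_set) - statistics.mean(other_type_readings))
    if difference <= quality_threshold:
        return None  # not distinct enough to be treated as a separate behavior type

    # "determining a first set of one or more representative readings
    # based on magnitudes" -- approximated by the mean magnitude.
    representative = statistics.mean(first_set)

    # "determining a first magnitude threshold based on variances" --
    # approximated as twice the standard deviation of the readings.
    magnitude_threshold = 2 * statistics.pstdev(first_set)

    # "associating" the representative reading and the threshold with the cluster.
    return {
        "behavior_type": "first behavior type",
        "representative": representative,
        "magnitude_threshold": magnitude_threshold,
        "members": list(first_set),
    }
```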
REFERENCES

The Examiner relies on the following references:

Name          Reference            Date
Young         US 2003/0154072 A1   Aug. 14, 2003
Kashi         US 2009/0049544 A1   Feb. 19, 2009
Oppenheimer   US 2014/0089243 A1   Mar. 27, 2014
Akella        US 9,092,802 B1      July 28, 2015

REJECTIONS

Claims 1–11, 21–27, 31–35, and 37 are rejected under pre-AIA 35 U.S.C. § 103(a) as unpatentable over the combined teachings of Oppenheimer, Kashi, and Akella (Final Act. 2–10).

Claim 33 is rejected under pre-AIA 35 U.S.C. § 103(a) as unpatentable over the combined teachings of Oppenheimer, Kashi, Akella, and Hsu (id. at 10–11).

Claim 36 is rejected under pre-AIA 35 U.S.C. § 103(a) as unpatentable over the combined teachings of Oppenheimer, Kashi, Akella, and Young (id. at 11–12).

We have only considered those arguments that Appellant raised in the Briefs. Arguments Appellant could have made but chose not to make in the Briefs have not been considered and are deemed waived (see 37 C.F.R. § 41.37(c)(1)(iv)).

OPINION

35 U.S.C. § 103(a): Claims 1–11, 21–27, 31–35, and 37

Claims 1, 8, and 21

Appellant contends the method as recited in claim 1 is not obvious over the combined teachings of Oppenheimer, Kashi, and Akella (Appeal Br. 11, 22). In particular, Appellant contends neither Oppenheimer nor Akella discloses “creating behavior clusters” in the manner recited in claim 1. Appellant argues, “Oppenheimer generally discloses the use of ‘clusters’”; however, Oppenheimer fails to disclose “how these ‘clusters’ are created” (id. at 11). Rather, according to Appellant, “as Oppenheimer’s ‘clusters’ appear to be simply pre-defined by the programmer of the system, Oppenheimer fails to teach that behavior clusters are dynamically created for a specific user device based on motion and environment readings collected from that same user device” (id. (emphasis in original); Reply Br. 2). Appellant further argues, “Akella does not cure” Oppenheimer’s defects; rather, “Akella simply discloses the use of ‘clustering’ in a highly generalized manner” (Appeal Br. 12).

We do not find Appellant’s argument persuasive. The Examiner finds Oppenheimer (Final Act. 2–3 (citing Oppenheimer ¶¶ 401, 1375–1378, 1392, 1412–1413, 1505, Figs. 5B, 6B)) in combination with Akella (id. at 5 (citing 7:9, 7:59, 8:50–54)) teaches the disputed limitations. Specifically, Akella discloses “the compressed relevance model may be generated as a sequence of clustering of usage time series, possibly a usage-based predictor, both together based on categorizing users, types, and devices” (Akella, 7:57–61). Akella further discloses:

    The User Behavior Clustering module may receive input from and provide output to the Big Data Store and the Macro/Micro Profiles (Graph) module. The User Behavior Clustering module may analyze groupings of data or potentially group data pertaining to user behavior as described herein.

(id. at 8:50–54). Therefore, Akella teaches creating behavior clusters.

Appellant argues Akella does not disclose the recited steps for creating behavior clusters (Appeal Br. 13). However, the Examiner relies on Oppenheimer to teach the recited steps of “determining that a first set of sensor readings . . .”; “determining a first set of one or more representative readings”; and “determining a first magnitude threshold” (Final Act. 3) and Akella to teach “responsive to determining that the first set of sensor readings . . .” and “associating the first set of one or more representative readings” (id. at 5). As noted by the Examiner:

    [O]ne cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. … it is the combination of references which render the claim obvious to a person of ordinary skill in the art. Specifically, Oppenheimer discloses the use of clusters to determine normal and abnormal use of the device. While Oppenheimer uses pre-created clusters for the states of the device, it is Akella which discloses the creation of new clusters.

(Ans. 3–4). Here, Appellant is arguing the references individually while the Examiner is relying on the combination of Oppenheimer and Akella to teach the disputed limitations.

Appellant does not provide specific arguments against the Examiner’s findings but generally states “Oppenheimer cannot possibly teach ‘creating behavior clusters’ by performing each of the specific steps recited in claim 1 (e.g., creating behavior clusters using a specific sequence of steps, upon satisfaction of certain specific conditions)” (Appeal Br. 11). The Examiner, however, sets forth with specificity where each limitation is taught (Final Act. 2–6; see also Ans. 3–4 (further explaining why an ordinarily skilled artisan would have found the limitations obvious over the combined teachings of Oppenheimer and Akella)).
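Again for illustration only, the remaining limitations of claim 1 (comparing later sensor data against each cluster's representative readings and magnitude threshold, and performing a security action when nothing matches) might be sketched as follows. The lock_device callback and the simple range test are assumptions, not language from the claim or the references.

```python
def behavior_is_unrecognized(new_reading, clusters):
    """True when the reading is out of range of every cluster's magnitude
    threshold, measured from that cluster's representative reading."""
    for cluster in clusters:
        deviation = abs(new_reading - cluster["representative"])
        if deviation <= cluster["magnitude_threshold"]:
            return False  # matches a known behavior type
    return True  # different from each behavior type of the behavior clusters

def monitor(new_reading, clusters, lock_device):
    # "causing the user device to perform a security action" when the behavior
    # type corresponding to the new data differs from every known cluster.
    if behavior_is_unrecognized(new_reading, clusters):
        lock_device()  # hypothetical security action, e.g., forcing re-authentication
```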
Appellant additionally contends “Oppenheimer fails to teach that behavior clusters are dynamically created for a specific user device based on motion and environment readings collected from that same user device” (Appeal Br. 11). Again, Appellant is arguing the references individually. The Examiner relies on the combination of references to teach “creating the behavior clusters,” while Appellant is arguing Oppenheimer and Akella individually. Accordingly, Appellant does not persuade us the combination of Oppenheimer and Akella fails to teach or suggest “creating the behavior clusters,” as recited in claim 1.

Appellant challenges the rejection of claims 8 and 21 on the same basis as claim 1 (Appeal Br. 14–16); therefore, for the reasons set forth with respect to claim 1, Appellant does not persuade us the combination of Oppenheimer and Akella fails to teach or suggest “creating the behavior clusters,” as recited in claims 8 and 21. Accordingly, we sustain the rejection of claims 1, 8, and 21 under 35 U.S.C. § 103(a) for obviousness over the combined teachings of Oppenheimer, Kashi, and Akella.

We further note, although not relied upon in affirming this rejection, Oppenheimer also discloses creating behavior clusters. For example, Oppenheimer describes a BIRD [(Portable Item Reporting Device[2])] Training (Self Configuration) operation, in which “Realtime or Time-Proximate Environmental Conditions/Usage Conditions/Usage Data” is combined with a determination of “typical item usage based on sensor readings during BIRD training” (see Oppenheimer, Fig. 1D, item 700, item 1000). The system of Oppenheimer can determine behavior clusters either by a BIRD configuration pre-defined by a user or system administrator, or alternatively self-configured (id. ¶¶ 388–389). Specifically, Oppenheimer states:

    In an alternative embodiment, the ExD criteria (170) may be determined in whole or part by the BIRD (200) itself during one or more training sessions or configuration time periods. During the training periods, an authorized user (AU) (not shown) uses the item (for example, her keys (100.K) in her purse) in ways designed to train the BIRD (200) to distinguish normal item usage from anomalous item usage (503.2). In this case, on-board BIRD Navigation (1000) -- possibly augmented at points by BIRD logic (500) on a configuration computer (335) -- uses the sensor data (700) collected during the training period to determine normal (503.3) vs. anomalous (503.2) item usage, that is, ExD criteria (170).

(id. ¶ 389 (emphases added)). Thus, rather than simply being “pre-defined by the programmer of the system,” as Appellant has argued, Oppenheimer discloses the system may itself, through a user, create behavior clusters based on sensor data.

[2] The acronym was purposefully changed to BIRD from PIRD because “‘BIRD’ sounds beautiful and the letter ‘B’ looks much like the letter ‘P.’ Further, in both Danish and Norwegian, ‘portable’ is ‘bærbare,’ as well as being ‘bärbara’ in Swedish, so we find the letter ‘B’ for portable after all. Also, birds are generally pretty smart when it comes to finding their way home.” (Oppenheimer ¶¶ 38–39).
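As a rough, purely illustrative analogy to the training-based self-configuration quoted above (this is not Oppenheimer's BIRD logic, and the padded per-sensor ranges are an assumption), a training period could be summarized into expected-usage criteria like so:

```python
def learn_usage_criteria(training_log):
    """Derive expected-usage criteria from sensor data collected while an
    authorized user handles the item during a training period.

    training_log -- dict mapping a sensor name to the values recorded during training
    """
    criteria = {}
    for sensor, values in training_log.items():
        # Normal usage is summarized per sensor as the observed range, padded
        # slightly; readings outside the range are later treated as anomalous.
        low, high = min(values), max(values)
        pad = 0.1 * (high - low)
        criteria[sensor] = (low - pad, high + pad)
    return criteria

def usage_is_normal(current_readings, criteria):
    # current_readings -- dict mapping a sensor name to its latest reading
    return all(low <= current_readings[sensor] <= high
               for sensor, (low, high) in criteria.items())
```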
Claim 34

Appellant contends that the method as recited in claim 34 is not obvious over the combined teachings of Oppenheimer, Kashi, and Akella and, in particular, that the combination fails to teach or suggest “wherein the first magnitude threshold is a standard deviation for the first set of sensors readings,” as recited in claim 34 (Appeal Br. 16). More specifically, Appellant argues, “‘a standard deviation for the first set of sensor readings’ (i.e., ‘the first magnitude threshold’) is used to determine when to perform a security action on a user device” (id. (emphasis in original); Reply Br. 4). However, according to Appellant, “Oppenheimer teaches that a ‘standard deviation’ is used to determine when to discard data entirely, such that the data is not used at all” (Appeal Br. 16).

We are not persuaded. As explained by the Examiner, Oppenheimer “discloses the concept of using standard deviation to determine the value of the data (i.e., [outlier] or reliable)” (Ans. 5; see Final Act. 9 (citing Oppenheimer ¶¶ 2301, 2466)). Specifically, Oppenheimer discloses “[d]ata values previously recorded for the selected sensor, spanning the BIRD training period, are retrieved from the historical environmental data log via the data storage and management module” (Oppenheimer ¶ 2299 (citations omitted)). Oppenheimer further discloses “[o]utlier values may be determined based on a number of criteria including …: the number of standard deviations of variance from a mean or normal value for the data” (id. ¶ 2301). Thus, Oppenheimer discloses “wherein the first magnitude threshold is a standard deviation of the first set of sensor readings,” as claimed.

Appellant also argues “[a]lthough Oppenheimer generally discloses that ‘standard deviation’ can be used to determine ‘outliers,’ Oppenheimer specifically teaches that outliers are ‘suppressed’ or ‘ignored’” (Appeal Br. 16; Reply Br. 4). We do not find Appellant’s argument persuasive. Oppenheimer states, “[a] determination as to whether to keep or ignore outlier values … may be made based on a number of parameters and criteria” -- not “suppress” or “ignore” (Oppenheimer ¶ 2301 (emphasis added)).

Lastly, Appellant contends Oppenheimer fails to disclose “using ‘a standard deviation for [a] first set of sensors readings’ to determine when to perform a security action on a user device, a substantially different purpose” (Appeal Br. 16–17). The Examiner sets forth with specificity where Oppenheimer teaches the step of “perform[ing] a security action,” as recited in claim 1 (Final Act. 3 (citing Oppenheimer ¶¶ 572–574)). Appellant does not address the Examiner’s findings; instead, Appellant only challenges the Examiner’s findings regarding determination of the first magnitude threshold.

Accordingly, Appellant does not persuade us Oppenheimer fails to disclose “wherein the first magnitude threshold is a standard deviation for the first set of sensors readings,” as recited in claim 34. Therefore, we sustain the rejection of claim 34 under 35 U.S.C. § 103(a) for obviousness over the combined teachings of Oppenheimer, Kashi, and Akella.
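To make the disputed language concrete, and again only as an illustration under assumed details (the choice of one standard deviation as the cutoff is arbitrary, and the sample values are invented), a magnitude threshold that "is a standard deviation for the first set of sensor readings" could operate as follows:

```python
import statistics

def standard_deviation_threshold(first_set):
    # Claim 34's framing, sketched: the cluster's magnitude threshold is the
    # standard deviation of that cluster's sensor readings.
    return statistics.pstdev(first_set)

def out_of_range(reading, representative, threshold):
    # A later reading is out of range when it deviates from the representative
    # reading by more than the (standard-deviation) threshold.
    return abs(reading - representative) > threshold

# Invented example data: readings clustered around 1.0 (arbitrary units).
first_set = [0.9, 1.0, 1.1, 1.05, 0.95]
threshold = standard_deviation_threshold(first_set)
print(out_of_range(2.4, statistics.mean(first_set), threshold))  # True
```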
Claim 35

Appellant contends the method as recited in claim 35 is not obvious over the combined teachings of Oppenheimer, Kashi, and Akella (Appeal Br. 17). The issue presented by the arguments is whether the combination of Oppenheimer, Kashi, and Akella teaches, suggests, or otherwise renders obvious “determining that differences between the first set of sensor readings and sensor readings of other behavior types exceed a quality threshold comprising determining that the differences between the first set of sensor readings and sensor readings of other behavior types are greater than a threshold distance,” as recited in claim 35. More specifically, Appellant argues, “Oppenheimer does not contemplate anything analogous to ‘threshold distances’ at all, much less ‘determining that the differences between the first set of sensor readings and sensor readings of other behavior types are greater than a threshold distance’” (Appeal Br. 18; Reply Br. 5).

We are not persuaded. Oppenheimer discloses “determining that differences between the first set of sensor readings and sensor readings of other behavior types exceed a quality threshold,” as recited in claim 1 (Final Act. 3 (citing Oppenheimer ¶ 1375, Figs. 5B, 6B)). In addition, Oppenheimer describes that BIRD logic “provides criteria and/or methods to compare usage data . . . against usage expectations” (Oppenheimer ¶ 1375; Final Act. 9; Ans. 5; see also Oppenheimer ¶¶ 1381–1382, Fig. 5B (describing a plurality of clusters which contain criteria to determine which state the BIRD logic is currently in)). Moreover, Oppenheimer discloses determining “just how ‘acceptably close’ the usage data must be to the usage expectations” (id. ¶ 401; Final Act. 3). In light of this disclosure, we agree with the Examiner that Oppenheimer teaches or suggests comparing one set of sensor readings against another to determine a state of the item or if it is “otherwise in an anomalous state” (Ans. 5). We determine an ordinarily skilled artisan would have found it obvious to set the exceeding of a quality threshold to be “greater than a threshold distance,” as recited in claim 35, as this is comparing two sensor readings. Indeed, the determination of how close the usage data must be to the usage expectation, to be a value within a particular behavior cluster, teaches or suggests the claimed “threshold distance.”

Appellant’s argument that “Oppenheimer does not teach how to create new clusters” (Appeal Br. 18) is not persuasive, as the Examiner relied on Akella to teach this limitation (Final Act. 5; Ans. 5). As previously discussed with respect to claim 1, the Examiner is relying on the combination of Oppenheimer, Kashi, and Akella to teach the limitations of claim 35 while Appellant is arguing the references individually. Therefore, we are not persuaded the combination of Oppenheimer, Kashi, and Akella fails to teach or suggest the limitation as recited in claim 35. Accordingly, we sustain the rejection of claim 35 under 35 U.S.C. § 103(a) for obviousness over the combined teachings of Oppenheimer, Kashi, and Akella.

Remaining Dependent Claims

Dependent claims 2–11, 21–27, 31–33, and 37 are not separately argued (see Appeal Br.); therefore, these claims fall with their respective independent claims. Accordingly, we sustain the rejection of claims 1–11, 21–27, 31–35, and 37 under 35 U.S.C. § 103(a) for obviousness over Oppenheimer, Kashi, and Akella.
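Only as an illustration of the terminology discussed for claim 35 above, a "threshold distance" test between two sets of readings might look like the following. The Euclidean metric is an assumption, since neither the claim nor the cited disclosure is quoted as specifying one, and the example values are invented.

```python
import math

def exceeds_threshold_distance(first_set, other_set, threshold_distance):
    """True when the differences between two equal-length sets of sensor
    readings are greater than a threshold distance."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(first_set, other_set)))
    return distance > threshold_distance

# Invented example: two short sequences of readings and an arbitrary threshold.
print(exceeds_threshold_distance([1.0, 1.1, 0.9], [2.0, 2.2, 1.8], 1.0))  # True
```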
35 U.S.C. § 103(a): Claim 36

Appellant contends the method as recited in claim 36 is not obvious over the combined teachings of Oppenheimer, Kashi, Akella, and Young (Appeal Br. 18) and, in particular, contends the combination of Oppenheimer, Kashi, Akella, and Young fails to teach or suggest:

    determining the first set of one or more representative readings for the first behavior cluster comprises . . . [(i)] determining . . . a maximum distance between the sensor reading and another sensor reading of the first behavior cluster; [and (ii)] identifying a particular sensor reading of the first behavior cluster having a lowest maximum distance,

as recited in claim 36 (id. at 19). Appellant argues “Young simply refers to a ‘K-means clustering algorithm’” but does not disclose “a maximum distance between the sensor reading and another sensor reading of the first behavior cluster,” or “having a lowest maximum distance” (id.). Nor, according to Appellant, “does Young disclose or suggest performing such steps specifically to determine a ‘representative reading’ for a behavior cluster” (id.; Reply Br. 7). Appellant further argues, “the Examiner did not address these particular steps at all, much less identify any particular teaching in the cited references that allegedly render obvious the performance of such steps” (Appeal Br. 20).

We are not persuaded by Appellant’s contention that Young’s teaching of the K-means clustering algorithm, as explained by the Examiner, fails to teach the disputed limitation. Appellant contends, “Young does not refer to distances at all, aside from generally assigning points to the ‘nearest cluster center’ without regard for a ‘maximum distance’ or a ‘lowest maximum distance’” (Reply Br. 7). However, the Examiner finds Young teaches the argued limitations (Final Act. 12 (citing Young ¶ 52)) and, more specifically, explains:

    [b]oth of these common clustering elements are found in Young in paragraph 52 which recites calculating the center of the cluster (a sensor reading with the lowest maximum distance would be zero, i.e. the center of the cluster) as well as finding the distance a data point can be from the center and still be part of that cluster (i.e. maximum distance between two sensor readings).

(Ans. 6). Thus, Young teaches calculating the center of a cluster by finding the sensor with the lowest maximum distance. And by determining the distance a data point can be from the center, yet still be part of the cluster, Young teaches “determining, for each sensor reading of the first behavior cluster, a maximum distance between the sensor reading and another sensor reading of the first behavior cluster,” as recited in claim 36. Moreover, Young teaches determining the average distance of cluster points to their cluster centroids and, therefore, teaches determining the distance of each cluster point from its cluster centroid (Young ¶ 52). Appellant provides an “illustrative example” of various sensor readings (Reply Br. 7), but Appellant has not identified how Young’s K-means clustering algorithm, which calculates distances from a centroid, fails to teach or suggest the disputed limitation when taken in combination with the other relied-upon references. Additionally, as set forth by the Examiner, the claim does not specify which sensor is the “another sensor” (Ans. 6).
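As a purely illustrative rendering of the "lowest maximum distance" language argued for claim 36 (this is not Young's K-means procedure, which assigns points to the nearest cluster center; the one-dimensional readings and the selection rule are assumptions used only for illustration):

```python
def representative_reading(cluster_readings):
    """Pick the reading whose maximum distance to any other reading in the
    cluster is lowest -- one possible reading of claim 36's 'representative
    reading' steps, not a reconstruction of Young's centroid computation."""
    if len(cluster_readings) == 1:
        return cluster_readings[0]
    best_index, best_max = None, float("inf")
    for i, reading in enumerate(cluster_readings):
        # Maximum distance between this reading and any other cluster member.
        max_distance = max(abs(reading - other)
                           for j, other in enumerate(cluster_readings) if j != i)
        if max_distance < best_max:
            best_index, best_max = i, max_distance
    return cluster_readings[best_index]

# Invented example: 1.1 has the lowest maximum distance to the other readings.
print(representative_reading([0.9, 1.0, 1.1, 1.6]))  # 1.1
```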
Appellant additionally argues Young does not teach “determin[ing] a ‘representative reading’ for a behavior cluster” (Reply Br. 7); however, the Examiner relies on Oppenheimer, not Young, to teach the limitation of “determining the first set of one or more representative readings for the first behavior cluster” (Final Act. 3) and Young to teach the specific steps (id. at 12). Therefore, we are not persuaded the combination of Young and Oppenheimer fails to teach “determining the particular sensor reading as a representative reading for the first behavior cluster,” as recited in claim 36.

Accordingly, Appellant does not persuade us the combination of Oppenheimer, Kashi, Akella, and Young fails to teach or suggest the limitations as recited in claim 36. Therefore, we sustain the rejection of claim 36 under 35 U.S.C. § 103(a) for obviousness over the combined teachings of Oppenheimer, Kashi, Akella, and Young.

CONCLUSION

The Examiner’s rejections are affirmed. More specifically:

The rejection of claims 1–11, 21–27, 31–35, and 37 under pre-AIA 35 U.S.C. § 103(a) as unpatentable over Oppenheimer, Kashi, and Akella is affirmed;

The rejection of claim 33 under pre-AIA 35 U.S.C. § 103(a) as unpatentable over Oppenheimer, Kashi, Akella, and Hsu is affirmed; and

The rejection of claim 36 under pre-AIA 35 U.S.C. § 103(a) as unpatentable over Oppenheimer, Kashi, Akella, and Young is affirmed.

DECISION SUMMARY

In summary:

Claims Rejected          35 U.S.C. §   Reference(s)/Basis                  Affirmed                 Reversed
1–11, 21–27, 31–35, 37   103(a)        Oppenheimer, Kashi, Akella          1–11, 21–27, 31–35, 37
33                       103(a)        Oppenheimer, Kashi, Akella, Hsu     33
36                       103(a)        Oppenheimer, Kashi, Akella, Young   36
Overall Outcome:                                                           1–11, 21–27, 31–37

TIME PERIOD FOR RESPONSE

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a) (see 37 C.F.R. § 1.136(a)(1)(iv)).

AFFIRMED