Ex parte Satish et al., No. 13/742,218 (P.T.A.B. Sep. 12, 2017)

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 13/742,218
FILING DATE: 01/15/2013
FIRST NAMED INVENTOR: Sourabh Satish
ATTORNEY DOCKET NO.: 32745-21981/US
CONFIRMATION NO.: 8354

34415    7590    09/14/2017
SYMANTEC / FENWICK
SILICON VALLEY CENTER
801 CALIFORNIA STREET
MOUNTAIN VIEW, CA 94041

EXAMINER: PEREZ-ARROYO, RAQUEL
ART UNIT: 2169
NOTIFICATION DATE: 09/14/2017
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): ptoc@fenwick.com, bhoffman@fenwick.com, sfuentes@fenwick.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte SOURABH SATISH, GOVIND SALINAS, and VINCENT CHEONG

Appeal 2017-005726
Application 13/742,218 [1]
Technology Center 2100

Before JEFFREY S. SMITH, JOHNNY A. KUMAR, and TERRENCE W. McMILLIN, Administrative Patent Judges.

McMILLIN, Administrative Patent Judge.

DECISION ON APPEAL

This is a decision on appeal under 35 U.S.C. § 134(a) of the final rejection of claims 1-17, 19, and 20. Final Act. 1. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

[1] According to Appellants, the real party in interest is Veritas Technologies LLC (App. Br. 2).

THE CLAIMED INVENTION

The present invention generally relates to computer security and more particularly to classifying samples, such as samples of computer files. Spec. ¶ 1. Independent claim 1 is directed to a computer-implemented method; independent claim 9 is directed to a computer; and independent claim 15 is directed to a non-transitory computer-readable storage medium. App. Br. 12, 14, 16. Claim 1 recites:
    1. A computer-implemented method of classifying a sample, comprising:
        establishing a set of samples containing labeled and unlabeled samples;
        gathering values of features from the labeled and unlabeled samples;
        selecting a subset of the features;
        clustering together labeled and unlabeled samples having at least a threshold measure of similarity among the gathered values of the selected subset of features to produce a set of clusters, each cluster having a subset of samples from the set of samples;
        recursively iterating the selecting and clustering steps on the subset of samples in each cluster in the set of clusters until at least one stopping condition is reached, wherein there are a plurality of iterations of the selecting and clustering steps, a first clustering iteration uses a first threshold measure of similarity and subsequent clustering iterations use increasingly stricter similarity thresholds, the iterations produce a cluster having a labeled sample and an unlabeled sample, and the selecting step selects different subsets of the features for different iterations of the plurality of iterations by:
            determining an amount of variance in values of features of samples in a cluster of the set of clusters produced by a clustering iteration; and
            selecting a subset of features for a subsequent clustering iteration from among available features responsive to the determined amount of variance in values of features of samples in the cluster of the set of clusters produced by the clustering iteration; and
        propagating a label from the labeled sample in the cluster to the unlabeled sample in the cluster to classify the unlabeled sample.

REJECTIONS ON APPEAL

Claims 1, 2, 4, 8-11, 14-17, and 20 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Ribnick et al. (US 2013/0202200 A1, published Aug. 8, 2013) ("Ribnick"), Gutta et al. (US 6,801,917 B2, published Oct. 5, 2004) ("Gutta"), and Ben (US 8,473,532 B1, published June 25, 2013). Final Act. 2.

Claims 3, 5, 6, 12, 13, and 19 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Ribnick, Gutta, and Schipka (US 2009/0013405 A1, published Jan. 8, 2009). [2] Final Act. 23.

Claim 7 stands rejected under 35 U.S.C. § 103(a) as being unpatentable over Ribnick, Gutta, and Han et al. (US 8,515,193 B1, published Aug. 20, 2013). [3] Final Act. 33.

[2] The statement of rejection lists Ribnick, Gutta, and Schipka, but should properly also list Ben, which is relied upon in the grounds of rejection for independent claims 1, 8, and 15, upon which these claims depend. See Final Act. 23. We find the Examiner's omission of Ben to be harmless error.

[3] The statement of rejection lists Ribnick, Gutta, and Han, but should properly also list Ben, which is relied upon in the grounds of rejection for independent claims 1, 8, and 15, upon which these claims depend. See Final Act. 33. We find the Examiner's omission of Ben to be harmless error.

ANALYSIS

We have reviewed the Examiner's rejections in light of Appellants' arguments that the Examiner erred. We are not persuaded that Appellants identify reversible error. Upon consideration of the arguments presented in the Appeal Brief, we agree with the Examiner that all the pending claims are unpatentable over the cited combination of references. We adopt as our own the findings and reasons set forth in the rejection from which this appeal is taken and in the Examiner's Answer. We provide the following explanation to highlight and address specific arguments and findings primarily for emphasis.
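For orientation only, the following is a minimal, hypothetical Python sketch of the recursive clustering and label-propagation flow recited in claim 1. The cosine similarity measure, the greedy grouping, the fixed threshold increment, and every function and parameter name are illustrative assumptions introduced here; they are not the applicant's disclosed implementation.

import numpy as np

def select_features(X, k):
    # Pick the k feature indices with the highest variance across the samples.
    variances = X.var(axis=0)
    return np.argsort(variances)[::-1][:k]

def cluster_by_similarity(X, threshold):
    # Greedy grouping: a sample joins the first cluster whose seed it matches
    # with cosine similarity >= threshold; otherwise it seeds a new cluster.
    clusters = []
    for i in range(len(X)):
        for cluster in clusters:
            seed = X[cluster[0]]
            sim = float(X[i] @ seed) / (np.linalg.norm(X[i]) * np.linalg.norm(seed) + 1e-12)
            if sim >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def classify(X, labels, threshold=0.5, step=0.1, n_features=4, max_depth=5):
    # Recursively re-select features and re-cluster each cluster with an
    # increasingly strict similarity threshold, then propagate a label from a
    # labeled sample to unlabeled samples that share its cluster.
    X = np.asarray(X, dtype=float)
    labels = list(labels)

    def recurse(indices, threshold, depth):
        if depth >= max_depth or len(indices) < 2:       # stopping conditions
            return
        feats = select_features(X[indices], n_features)  # variance-driven subset
        sub = X[np.ix_(indices, feats)]
        for cluster in cluster_by_similarity(sub, threshold):
            members = [indices[i] for i in cluster]
            known = {labels[m] for m in members if labels[m] is not None}
            if len(known) == 1:                          # propagate the label
                label = known.pop()
                for m in members:
                    labels[m] = label
            recurse(members, min(threshold + step, 0.99), depth + 1)

    recurse(list(range(len(X))), threshold, 0)
    return labels

# Toy usage: two labeled samples and two unlabeled samples.
X = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.2], [0.0, 1.0, 0.8], [0.1, 0.9, 0.7]]
print(classify(X, ["benign", None, "malicious", None]))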
Regarding claim 1, Appellants contend "Ben does not teach or suggest that 'a first clustering iteration uses a first threshold measure of similarity and subsequent clustering iterations use increasingly stricter similarity threshold' as claimed." App. Br. 8. Specifically, Appellants argue Ben teaches "that the threshold is 'dynamically adjust[ed] . . . such that the process will produce a group that is within the desired size'" which "may include decreasing the threshold (i.e., using less strict similarity thresholds)." Id.

As cited by the Examiner, Ben discloses:

    The process starts by identifying the total number of items "n" (files or folders) to be clustered. It compared each pair of items to determine how similar the pairs of items are. . . . Based on the results, the process then determines a threshold in any conventional manner such as by taking the average similarity, the median or another percentile as the threshold. The process clusters files by partitioning the n given items into smaller and smaller groups. It starts with the n items as the initial group. It compares each pair of items in the group. If a pair (j, k) of items in the group have a similarity less than the threshold, then it splits the group into two groups. . . . The process is continued in an iterative manner until a group is found wherein all pairs of items have a similarity larger than the threshold. . . . The limit is use to dynamically adjust the threshold such that the process will produce a group that is within the desired size. As shown in the pseudo code, the threshold is increased if all resulting groups are too large and is decreased if all resulting groups are too small.

Ben col. 8, ll. 47-62, 65-67, col. 9, ll. 8-12 (emphasis added).

The Examiner finds, and we agree, Ben teaches "iterative clustering based on feature similarity threshold with first threshold value" and "dynamically increasing similarity threshold to be used in the clustering iterations, where increasing the similarity threshold means that the similarity amongst the clustered items must be even greater, in other words, stricter." Final Act. 7.

Appellants contend relying on Ben's teaching of a clustering criterion including producing a group that is within a desired size is based on impermissible hindsight. App. Br. 9. Specifically, Appellants argue that Ben's teaching to "decrease the similarity threshold in certain circumstances" would lead away from the claimed invention, and Ben's clustering iterations using increasingly strict thresholds is not sufficient to establish inherency. App. Br. 10.

We note a reference does not teach away if it merely expresses a general preference for an alternative invention but does not "criticize, discredit, or otherwise discourage" investigation into the invention claimed. In re Fulton, 391 F.3d 1195, 1201 (Fed. Cir. 2004). Ben does teach both that the "threshold is increased if all resulting groups are too large" and that the threshold "is decreased if all resulting groups are too small" in order to "produce a group that is within the desired size." Ben col. 9, ll. 9-12. However, Ben's teaching to decrease the threshold if all resulting groups are too small does not criticize, discredit, or otherwise discourage increasing the threshold if all resulting groups are too large.
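To make the quoted mechanism concrete, the following is a rough, hypothetical Python sketch of a size-driven threshold adjustment of the kind Ben describes. The cosine similarity measure, the seed-based split rule, and the 0.05 adjustment step are illustrative assumptions; this is not Ben's pseudo code.

import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def split_group(X, group, threshold):
    # Split the group around the first pair whose similarity falls below the
    # threshold; remaining members follow whichever seed they resemble more.
    for i in range(len(group)):
        for j in range(i + 1, len(group)):
            if cosine(X[group[i]], X[group[j]]) < threshold:
                a, b = [group[i]], [group[j]]
                for m in group:
                    if m in (group[i], group[j]):
                        continue
                    if cosine(X[m], X[group[i]]) >= cosine(X[m], X[group[j]]):
                        a.append(m)
                    else:
                        b.append(m)
                return [a, b]
    return [group]  # every pair already meets the threshold

def partition(X, max_size, min_size, threshold=0.5, step=0.05, rounds=20):
    # Re-partition from scratch each round; raise the threshold if every group
    # is too large, lower it if every group is too small, otherwise stop.
    X = np.asarray(X, dtype=float)
    for _ in range(rounds):
        groups = [list(range(len(X)))]
        changed = True
        while changed:
            changed = False
            next_groups = []
            for g in groups:
                parts = split_group(X, g, threshold)
                changed = changed or len(parts) > 1
                next_groups.extend(parts)
            groups = next_groups
        if all(len(g) > max_size for g in groups):
            threshold += step      # stricter threshold: forces more splitting
        elif all(len(g) < min_size for g in groups):
            threshold -= step      # looser threshold: allows larger groups
        else:
            return groups, threshold
    return groups, threshold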
Ben merely provides an alternative to increasing the threshold. We agree with the Examiner's findings that the "fact that Ben is performing an additional step of evaluating the minimum limit, does not preclude that [] Ben's threshold is based on similarity" and "that it can be adjusted to a higher or increased threshold." Ans. 2.

Appellants contend the Examiner's rejection relies upon impermissible hindsight, because the Examiner's proffered rationale for the combination, specifically that Ben's clustering produces "a group that is within the desired size," does not address the Appellants' motivation for the claimed invention of "'caus[ing] only relatively similar file samples to be clustered together and result in more clusters each containing fewer samples' in subsequent clustering iterations." App. Br. 9 (citing Spec. ¶ 54). Whether Appellants may have been driven by a different motivation than the rationale supplied by the Examiner does not negate obviousness. See KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 419 (2007) ("In determining whether the subject matter of a patent claim is obvious, neither the particular motivation nor the avowed purpose of the patentee controls. What matters is the objective reach of the claim."). Therefore, we find Appellants' allegation of impermissible hindsight is unsupported in the record. Appellants provide no evidence that combining the teachings of Ribnick, Gutta, and Ben, as proffered by the Examiner (Final Act. 7), would have been "uniquely challenging or difficult for one of ordinary skill in the art" (Leapfrog Enters., Inc. v. Fisher-Price, Inc., 485 F.3d 1157 (Fed. Cir. 2007)), nor have Appellants provided any objective evidence of secondary considerations, which our reviewing court guides "operates as a beneficial check on hindsight." Cheese Systems, Inc. v. Tetra Pak Cheese and Powder Systems, Inc., 725 F.3d 1341, 1352 (Fed. Cir. 2013).

We are further unpersuaded of Examiner error regarding Ben's teachings being insufficient to establish inherency because the Examiner did not make inherency findings. Appellants have not provided persuasive evidence that clustering items starting with an initial measure of similarity for an initial group, with subsequent clustering iterations requiring more similarity and resulting in smaller groups, as required by claim 1, is not taught or otherwise suggested by Ben's clustering n items with an initial group of items and partitioning the items into smaller and smaller groups utilizing similarity thresholds.

Accordingly, we sustain the § 103 rejection of claim 1, as well as the rejection of claims 2-17, 19, and 20 not separately argued.

DECISION

The rejections of claims 1-17, 19, and 20 are affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED