Ex Parte Wellman et al., Appeal No. 2016-000743, Application No. 13/749,618 (P.T.A.B. Jul. 21, 2017)

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 13/749,618
FILING DATE: 01/24/2013
FIRST NAMED INVENTOR: Richard W. Wellman
ATTORNEY DOCKET NO.: 3128.2.3
CONFIRMATION NO.: 9096

CORRESPONDENT: Kunzler Law Firm, 50 W. Broadway, 10th Floor, Salt Lake City, UT 84101
EXAMINER: GRANT, MICHAEL CHRISTOPHER
ART UNIT: 3715
NOTIFICATION DATE: 07/25/2017
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): docket@kunzlerlaw.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte RICHARD W. WELLMAN, KELLY D. PHILLIPPS, and DAVID B. GONZALEZ

Appeal 2016-000743¹
Application 13/749,618
Technology Center 3700

Before LINDA E. HORNER, ANNETTE R. REIMERS, and NATHAN A. ENGELS, Administrative Patent Judges.

HORNER, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Richard W. Wellman et al. (Appellants) seek our review under 35 U.S.C. § 134(a) of the Examiner's decision rejecting claims 13-27. Non-Final Office Action (December 22, 2014) (hereinafter "Non-Final Act."). Claims 1-12 are withdrawn. Non-Final Act. 1. We have jurisdiction under 35 U.S.C. § 6(b).

We AFFIRM.

¹ CloudVu, Inc. is an applicant as provided in 37 C.F.R. § 1.46. CloudVu, Inc. changed its name to PurePredictive, Inc. Appellants name PurePredictive, Inc. as the real party in interest. Appeal Brief 2 (June 18, 2015) (hereinafter "Appeal Br.").
CLAIMED SUBJECT MATTER

Appellants' claimed subject matter relates to "using machine learning to identify patterns in student engagement relative to electronic learning systems." Spec. ¶ 1. Claims 13, 22, and 24 are the independent claims on appeal. Claim 13 is illustrative of the claimed subject matter and is reproduced below.

13. An apparatus for determining student engagement, the apparatus comprising:

an activity monitor module configured to receive monitored electronic learning interactions of one or more students;

a machine learning module configured to compare, using machine learning, the monitored electronic learning interactions to a plurality of archetypal learning patterns, the machine learning comprising a machine learning ensemble including a plurality of learned functions from multiple machine learning classes; and

a result module configured to send an alert for at least one student of the one or more students based on the machine learning comparison.

Appeal Br. 10 (Claims Appendix).

REJECTIONS

The Non-Final Office Action includes the following rejections:

1. Claims 13-27 stand rejected under 35 U.S.C. § 101 as being directed to patent ineligible subject matter.

2. Claims 13-16 and 18-27 stand rejected under 35 U.S.C. § 102(b) as anticipated by Sorenson (US 2013/0004930 A1, published January 3, 2013).

3. Claim 17 stands rejected under 35 U.S.C. § 103(a) as unpatentable over Sorenson and Intelligent Miner.²

ANALYSIS

Rejection of claims 13-27 under 35 U.S.C. § 101

The Examiner's rejection

The Examiner determines that "[t]he claims are directed to the abstract idea of a method of organizing human activity" and, specifically, "methods for determining student engagement that could be performed by human beings alone." Non-Final Act. 2. The Examiner further finds that "the additional elements or combinations of elements in the claims other than the abstract idea ...
do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself." Id. at 3 ("[T]he claims include generic computer structures and functions that are well-understood and routine.").

Appellants' arguments

Appellants contest these findings.³ Appellants argue that "the claims are directed to machine learning, not human activity, and a human cannot perform the 'machine learning' limitations of the claims." Appeal Br. 4. Appellants further argue that "the claims do not preempt the entire field of determining student engagement such that others could not practice it" and "the claims only exclude certain applications of machine learning to monitored electronic learning interactions." Id. at 5. Appellants further draw parallels to the Federal Circuit's holding in DDR Holdings, LLC v. Hotels.com, LP, 773 F.3d 1245 (Fed. Cir. 2014), arguing that "the problems addressed by the claims specifically arise in the realm of computer technology (e.g., the specifically recited electronic learning)." Appeal Br. 6.

² The Examiner identifies "Intelligent Miner" as being an application of the Intelligent Miner computer program features as evidenced collectively by P. Cabena et al., "Intelligent Miner for Data Applications Guide," IBM, pages 89-103 (March 1999) and Lo Yuk Ting et al. (US 2010/0131314 A1, published May 27, 2010). Non-Final Act. 13.

³ Appellants present arguments for claims 13-27 as a group for the first ground of rejection. Appeal Br. 3-6. We select claim 13 as representative of the group, and claims 14-27 stand or fall with claim 13. 37 C.F.R. § 41.37(c)(1)(iv).
Appellants further argue "the claims add 'significantly more' than an abstract idea" because "machine learning is not a generic computer element such as a processor or memory, but is a specialized structure that serves a specialized technical purpose." Id. Appellants assert:

[T]he claims, as a whole, represent a significant improvement to the technical field of electronic learning, because comparisons that could not previously be reliably made (except by requiring a teacher to be present to observe and compare, thus destroying many of the distance learning benefits of electronic learning) can be made, via machine learning, by practicing the subject matter of the claims.

Reply Brief 3 (October 19, 2015) (hereinafter "Reply Br.").

Legal Principles

A patent may be obtained for "any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof." 35 U.S.C. § 101. The Supreme Court has held that this provision contains an important implicit exception: Laws of nature, natural phenomena, and abstract ideas are not patentable. Alice Corp. v. CLS Bank Int'l, 134 S. Ct. 2347, 2354 (2014); Gottschalk v. Benson, 409 U.S. 63, 67 (1972) ("Phenomena of nature, though just discovered, mental processes, and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work."). Notwithstanding that a law of nature or an abstract idea, by itself, is not patentable, the application of these concepts may be deserving of patent protection. Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71 (2012). In Mayo, the Court stated that "to transform an unpatentable law of nature into a patent eligible application of such a law, one must do more than simply state the law of nature while adding the words 'apply it.'" Id. at 72 (citation omitted).
In Alice, the Court reaffirmed the framework set forth previously in Mayo "for distinguishing patents that claim laws of nature, natural phenomena, and abstract ideas from those that claim patent-eligible applications of these concepts." Alice, 134 S. Ct. at 2355. The first step in the analysis is to "determine whether the claims at issue are directed to one of those patent-ineligible concepts." Id. In Enfish, LLC v. Microsoft Corporation, 822 F.3d 1327 (Fed. Cir. 2016), the Federal Circuit explained, "the first step in the Alice inquiry . . . asks whether the focus of the claims is on the specific asserted improvement in computer capabilities . . . or, instead, on a process that qualifies as an 'abstract idea' for which computers are invoked merely as a tool." Id. at 1335-36.

In FairWarning IP, LLC v. Iatric Systems, Inc., 839 F.3d 1089 (Fed. Cir. 2016), the Federal Circuit affirmed a district court determination that claims directed to "the concept of analyzing records of human activity to detect suspicious behavior" were not directed to patent eligible subject matter. Id. at 1094. The claims in FairWarning recited, generally, a method of detecting improper access of a patient's protected health information in a computer environment, comprising generating a rule for monitoring transactions/activity in an audit log, applying the rule to the audit log data, storing hits, and providing notifications of hits. Id. The Federal Circuit determined that the claims were directed to an abstract idea:

We have explained that the "realm of abstract ideas" includes "collecting information, including when limited to particular content." Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016) (collecting cases). We have also "treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category." Id.
And we have found that "merely presenting the results of abstract processes of collecting and analyzing information, without more (such as identifying a particular tool for presentation), is abstract as an ancillary part of such collection and analysis." Id. Here, the claims are directed to a combination of these abstract-idea categories. Specifically, the claims here are directed to collecting and analyzing information to detect misuse and notifying a user when misuse is detected. See id.

Id. at 1094-95; see also Intellectual Ventures I LLC v. Capital One Financial Corp., 850 F.3d 1332, 1340 (Fed. Cir. 2017) (noting that the Federal Circuit has held on numerous occasions that an invention directed to collection, manipulation, and display of data was an abstract process).

The court in FairWarning distinguished the claims before it from the claims in McRO, Inc. v. Bandai Namco Games America Inc., 837 F.3d 1299 (Fed. Cir. 2016), which were directed to "a specific asserted improvement in computer animation, i.e., the automatic use of rules of a particular type." 839 F.3d at 1094 (citing McRO, 837 F.3d at 1314). The court in McRO explained that "it [was] the incorporation of the claimed rules, not the use of the computer that 'improved [the] existing technological process' by allowing the automation of further tasks." McRO, 837 F.3d at 1313 (alteration in original) (quoting Alice, 134 S. Ct. at 2358). By contrast, "FairWarning's claims merely implement an old practice in a new environment" using "the same questions . . . that humans in analogous situations detecting fraud have asked for decades, if not centuries." FairWarning, 839 F.3d at 1094-95. "Although FairWarning's claims require the use of a computer, it is this incorporation of a computer, not the claimed rule, that purportedly 'improve[s] [the] existing technological process' by allowing the automation of further tasks." Id. at 1095 (quoting Alice, 134 S. Ct. at 2358).
If the claims are directed to a patent-ineligible concept, then the second step in the analysis is to consider the elements of the claims "individually and 'as an ordered combination'" to determine whether there are additional elements that "'transform the nature of the claim' into a patent-eligible application." Alice, 134 S. Ct. at 2355 (quoting Mayo, 566 U.S. at 79). In other words, the second step is to "search for an 'inventive concept'—i.e., an element or combination of elements that is 'sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the [ineligible concept] itself.'" Id. (brackets in original) (quoting Mayo, 566 U.S. at 72-73). The prohibition against patenting an abstract idea "cannot be circumvented by attempting to limit the use of the formula to a particular technological environment or adding insignificant post solution activity." Bilski v. Kappos, 561 U.S. 593, 610-11 (2010) (citation and internal quotation marks omitted). The Court in Alice noted that "'[s]imply appending conventional steps, specified at a high level of generality,' was not 'enough' [in Mayo] to supply an 'inventive concept.'" Alice, 134 S. Ct. at 2357 (quoting Mayo, 566 U.S. at 82, 77, 72).

In DDR Holdings, the Federal Circuit held that claims "directed to systems and methods of generating a composite web page that combines certain visual elements of a 'host' website with content of a third-party merchant" contained the requisite inventive concept. 773 F.3d at 1248. The court explained that the claims at issue involved a technological solution that overcame a specific challenge unique to the Internet. Id. at 1259. In BASCOM Global Internet Servs., Inc. v. AT&T Mobility LLC, 827 F.3d 1341, 1348 (Fed. Cir. 2016), the court similarly held that claims "directed to filtering content on the Internet" contained an inventive concept.
The court found "an inventive arrangement" of "known, conventional pieces" through "the installation of a filtering tool at a specific location, remote from the end-users, with customizable filtering features specific to each end user." Id. at 1350. The claimed custom filter could be located remotely from the user because the invention exploited the ability of Internet service providers to associate a search request with a particular individual account. Id. This technical solution overcame defects in prior art embodiments and elevated an otherwise abstract idea to a patentable invention. Id.

By contrast, if the claim language "provides only a result-oriented solution, with insufficient detail for how a computer accomplishes it," then the claims do not contain an "inventive concept" under Alice step 2. Intellectual Ventures I LLC v. Capital One Fin. Corp., 850 F.3d 1332, 1342 (Fed. Cir. 2017); see also Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) (explaining that claims are directed to an abstract idea where they do not recite "any particular assertedly inventive technology for performing [conventional] functions").

Step 1 of Alice

We agree with the Examiner that the subject matter of claim 13 is directed generally to the abstract idea of organizing human activity. In particular, claim 13, when viewed as a whole, is directed to determining student engagement through a combination of the abstract-idea categories of collection, manipulation, and display⁴ of student interaction data. We disagree with Appellants' characterization of the claims as being directed to machine learning. Appeal Br. 4. Appellants' argument considers only individual elements of the claims and does not consider the claimed subject matter as a whole. The recitation of "machine learning" at most limits the claimed invention to a particular field of use.
Limitations of the invention to a particular technological environment in which to apply the underlying abstract concept "do not make an abstract concept any less abstract under step one." Intellectual Ventures I, 850 F.3d at 1340. Although the use of machine learning adds a degree of particularity to the claims, the underlying concept embodied by the limitations merely encompasses the abstract idea itself of collection, manipulation, and display of student interaction data. See Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 715 (Fed. Cir. 2014) ("[A]ny novelty in implementation of the idea is a factor to be considered only in the second step of the Alice analysis.").

Appellants' attempt to analogize the claimed subject matter with the claims at issue in DDR Holdings, LLC v. Hotels.com, LP, 773 F.3d 1245 (Fed. Cir. 2014) is misplaced. Appeal Br. 5-6 (Appellants arguing that "the problems addressed by the claims specifically arise in the realm of computer technology"). Unlike the situation in DDR Holdings, in which the problem of retaining website visitors was a challenge particular to the Internet (id. at 1247), the problem addressed by the claims of evaluating a student's interaction with course material is not a challenge particular to a computer environment. The claims here recite the performance of a business practice (determining student engagement) known from the pre-electronic learning world along with the requirement to perform it on a computer. See Appeal Br. 4-5 (citing Spec. ¶¶ 2-3); Spec. ¶ 2 ("Th[e] lack of personal contact [between teachers and students] can make it difficult for academic institutions to determine whether students using electronic learning materials are engaged by the material, whether students are likely to succeed academically, or the like.").

⁴ We use "display" to refer to the claimed "result module."
The fact that the claimed solution involves computer technology is not surprising considering that the challenge was to implement the pre-electronic learning business practice on a computer. In DDR Holdings the court focused on the problem solved by the invention and not on the solution. The court characterized the problem in DDR as "a problem specifically arising in the realm of computer networks" because in the pre-Internet world, there were no websites. Id. at 1257. In this case, the problem addressed by the claims is one of determining student engagement. Even in an electronic learning environment, the problem does not specifically arise in the realm of computer networks. Indeed, Appellants acknowledge that humans routinely assess student engagement, although Appellants contend that the increasing use of electronic learning materials and computer-based learning, and the accompanying decrease in student-teacher interactions, make it more difficult for humans to assess student engagement. See Spec. ¶ 2; Appeal Br. 2-4. The fact that the claim is directed to a mechanism for using a computer to monitor these interactions and process the data to form observations does not remove the claim from the realm of an abstract idea.

Appellants further argue that "machine learning, by its very nature and the plain definition of 'machine,' must be performed by a statutory machine." Appeal Br. 5. "The bare fact that a computer exists in the physical rather than purely conceptual realm 'is beside the point.'" DDR, 773 F.3d at 1256 (quoting Alice, 134 S. Ct. at 2358). For these reasons, we determine that claim 13 is directed to an abstract idea of determining student engagement for which a computer is invoked merely as a tool.

Step 2 of Alice

Claim 13 does not add significantly more to the abstract idea. Appellants do not contest the fact that "machine learning" was a known technique at the time of the invention.
In fact, "machine learning" is a term of art, as evidenced by the Examiner's definition of the term as found in a dictionary. Ans. 4. Claim 13 amounts to a recitation of using conventional and generic machine learning to automate the processing of monitored interactions between a user and a computer.

Appellants argue that "machine learning is not a generic computer element such as a processor or memory, but is a specialized structure that serves a specialized technical purpose." Appeal Br. 6 (comparing machine learning to the thermocouple in Diamond v. Diehr, 450 U.S. 175, 178 (1981) and arguing that "machine learning is not unknown, but neither is it generic."). Although machine learning is not simply a generic computer, it is a known technique in computing, and Appellants fail to claim a specific type of machine learning. Rather, Appellants' claim 13 recites using machine learning generally on raw data to identify patterns in the data, which is a fundamental concept of machine learning. Thus, claim 13 does not present a new technical solution, and we perceive no "inventive concept" that transforms the abstract idea of collecting, manipulating, and displaying student interaction data into a patent-eligible application of that abstract idea. Such a general and generic recitation of machine learning, without particularity, is simply not enough under step two of the Alice test. See Ultramercial, 772 F.3d at 715-16 (holding the claims insufficient to supply an inventive concept because they did not "do significantly more than simply describe [the] abstract method," but rather are simply "conventional steps, specified at a high level of generality") (quoting Alice, 134 S. Ct. at 2357).
Appellants contend that the claims recite "significantly more" than merely applying an abstract idea because:

using machine learning to compare monitored electronic learning interactions to archetypal learning patterns, and to send an alert for a student or an evaluation for electronic learning material, based on the machine learning comparison, especially using "a machine learning ensemble including a plurality of learned functions from multiple machine learning classes," is a significant improvement to the technical field of electronic learning.

Reply Br. 5. We conclude, however, that the machine learning ensemble of claim 13 does not sufficiently transform the abstract concept into a patentable invention under step two. In particular, we fail to see, and the Specification fails to clearly evidence, how the use of a machine learning ensemble is a technological improvement over, or differs from, the general concept of machine learning. In particular, "a machine learning ensemble including a plurality of learned functions from multiple machine learning classes," although technical sounding, simply describes the general concept of machine learning itself. The mere fact that Appellants applied coined labels to conventional processes does not make the underlying concept inventive. See, e.g., Alice, 134 S. Ct. at 2352 n.2, 2360 (finding the claims abstract despite the recitation of technical-sounding names such as "shadow credit record[s]" and "shadow debit record[s]"). The recited "machine learning" and "machine learning ensemble" provide little more than an unspecified set of rules for recognizing patterns based on raw data and comparing monitored student interaction data to these patterns, akin to the general known process of machine learning. As such, the claim language amounts to the application of known "machine learning" to an abstract idea and does not impart an inventive concept to the abstract apparatus.
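The claim language the Board treats as generic, "a machine learning ensemble including a plurality of learned functions from multiple machine learning classes," can be pictured with a short hypothetical sketch. Nothing below is drawn from the application or the record; the data, the two model classes, and the majority vote are illustrative assumptions meant only to show how broadly that language reads:

```python
# Hypothetical sketch only: the data, model classes, and voting scheme are
# assumptions; the opinion records no implementation details.

def train_threshold(samples, labels):
    """Model class 1: learn a cutoff (midpoint of the two class means)."""
    pos = [x for x, lab in zip(samples, labels) if lab == 1]
    neg = [x for x, lab in zip(samples, labels) if lab == 0]
    cut = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x >= cut else 0

def train_nearest_mean(samples, labels):
    """Model class 2: learn per-class means, classify by the nearest mean."""
    pos = [x for x, lab in zip(samples, labels) if lab == 1]
    neg = [x for x, lab in zip(samples, labels) if lab == 0]
    m1, m0 = sum(pos) / len(pos), sum(neg) / len(neg)
    return lambda x: 1 if abs(x - m1) < abs(x - m0) else 0

# Raw training data: weekly interaction counts labeled engaged (1) or not (0).
X = [2, 3, 4, 10, 11, 12]
y = [0, 0, 0, 1, 1, 1]

# The "ensemble": learned functions drawn from two model classes.
ensemble = [train_threshold(X, y), train_nearest_mean(X, y)]

def compare_to_patterns(count):
    """Majority vote of the learned functions over one student's count."""
    votes = sum(f(count) for f in ensemble)
    return 1 if votes > len(ensemble) / 2 else 0

# "Result module": alert on students the ensemble scores as disengaged.
alerts = [c for c in [1, 9] if compare_to_patterns(c) == 0]
print(alerts)  # prints: [1]
```

Any number of off-the-shelf learner pairs would satisfy the same words, which illustrates the Board's point that the recitation, without more, does not specify any particular technique.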
For these reasons, we sustain the rejection of claim 13, and claims 14-27 which fall with claim 13, under 35 U.S.C. § 101.

Rejection of claims 13-16 and 18-27 under 35 U.S.C. § 102(b) as anticipated by Sorenson

The Examiner's Rejection

The Examiner finds that "'[m]achine learning' is a term of art and one definition of it is 'a branch of artificial intelligence in which a computer generates rules underlying or based on raw data that has been fed into it.'" Ans. 4 (citing http://dictionary.reference.com/browse/machine+learning). The Examiner further states "[i]n other words, the broadest reasonable interpretation of the term 'machine learning' would describe computer software whereby the algorithm that the computer employs may change over time based on data that is fed into the algorithm." Ans. 4.

Appellants' Position

Appellants do not disagree with the dictionary definition, but Appellants contend that the Examiner's restatement of the definition is unreasonably broad because "significant elements of the dictionary definition are omitted" and that "a computer that 'generates' rules underlying (or based on) raw data is creating new rules based on raw data, not merely changing an algorithm (in a possibly predetermined way) based on data." Reply Br. 3. Appellants contend that Sorenson teaches merely adjusting baselines and behavioral observations as data is acquired and fails to teach generating a rule that underlies raw data. Reply Br. 4. Thus, Appellants argue that Sorenson does not use machine learning. Id. at 6 ("the measures of 'Text Reading Approach,' and the like, identified by the Answer as 'learned functions' are in no way 'learned' within any reasonable interpretation consistent with the [S]pecification, but are programmatically defined instead").

Legal Principles

"Both anticipation under § 102 and obviousness under § 103 are two-step inquiries.
The first step in both analyses is a proper construction of the claims . . . . The second step in the analyses requires a comparison of the properly construed claim to the prior art." Medichem, S.A. v. Rolabo, S.L., 353 F.3d 928, 933 (Fed. Cir. 2003) (internal citations omitted). We determine the scope of the claims in patent applications not solely on the basis of the claim language, but giving claims "their broadest reasonable interpretation consistent with the specification" and "in light of the specification as it would be interpreted by one of ordinary skill in the art." In re Am. Acad. of Sci. Tech Ctr., 367 F.3d 1359, 1364 (Fed. Cir. 2004).

Claim Construction of "Machine Learning"

We agree with and adopt the dictionary definition of "machine learning" that was proffered by the Examiner and not contested by Appellants. This definition defines "machine learning" as "a branch of artificial intelligence in which a computer generates rules underlying or based on raw data that has been fed into it." See Ans. 4 (citing http://dictionary.reference.com/browse/machine+learning). We decline to adopt the Examiner's broadened restatement of this definition to encompass "computer software whereby the algorithm that the computer employs may change over time based on data that is fed into the algorithm." Ans. 4. This restatement omits a key feature of machine learning, which, as we understand the term as used in the art, requires the computer to generate new rules based on raw data and not simply to change existing rules based on data. Reply Br. 3.

Anticipation by Sorenson

Based on the interpretation of "machine learning" set forth above, for the reasons that follow, the Examiner's finding that Sorenson discloses "machine learning" as recited in claims 13-16 and 18-27 is not supported by a preponderance of the evidence.
Specifically, Sorenson discloses an analyzing function 150 that uses information gathered by monitoring a user's interaction with course material to generate behavioral observations about the interaction. Sorenson, para. 97. For example, Sorenson describes that the analyzing function 150 can compare the time it took a user to read a passage with a baseline reading rate assigned to the passage and generate a behavioral observation (e.g., the reader is not devoting sufficient time to the task of reading the passage). Id. Sorenson discloses further updating this behavioral observation based on additional input (e.g., if the user's scores for quizzes about the passage show that the user had a thorough understanding of the material). Id. at para. 98.

The table provided in paragraph 108 of Sorenson details further examples of rules employed by the computer to derive various learning behaviors based on data gathered from monitoring the user's interactions with course materials. For example, if the collected data show that the user engaged in intra-directional navigation through the material, left the browser environment to engage other applications, and/or made multiple exits and entrances into a course, the data would indicate that the user is distracted.

Sorenson does not, however, clearly disclose that these rules, such as those set forth in paragraph 108, are generated by the computer based on raw data received from monitoring the user's interactions. As pointed out by Appellants, it is possible that the rules set forth in Sorenson are predefined as part of the algorithm employed by the computer and that the computer does not generate new rules based on raw data. Instead, as discussed in Sorenson, the computer may update the behavioral observations based on further data, but these updates to the observations could also be based on predefined rules as part of the computer algorithm.
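The construction adopted above turns on the difference between a rule the computer generates from raw data and a predefined rule whose output is merely updated as data arrives. A minimal, hypothetical sketch of that distinction (the baseline, values, and function names are illustrative assumptions, not taken from Sorenson or the application):

```python
# Hypothetical sketch only: illustrates the adopted construction, not
# Sorenson's actual system or Appellants' claimed apparatus.

# Predefined rule (Sorenson, as possibly read): the cutoff is fixed by the
# programmer; incoming data changes the observation, never the rule itself.
BASELINE_SECONDS = 120  # assumed value, hard-coded in advance

def predefined_rule(reading_seconds):
    return "distracted" if reading_seconds < BASELINE_SECONDS else "engaged"

# "Machine learning" under the adopted dictionary definition: the computer
# generates the rule itself from raw data fed into it.
def generate_rule(raw_reading_times):
    # The cutoff is derived from the raw data rather than predefined.
    cutoff = sum(raw_reading_times) / len(raw_reading_times)
    return lambda seconds: "distracted" if seconds < cutoff else "engaged"

learned_rule = generate_rule([90, 100, 150, 160])  # cutoff generated as 125.0

# The same input can be classified differently because one cutoff was
# hard-coded and the other was generated from data.
print(predefined_rule(122), learned_rule(122))  # prints: engaged distracted
```

On this reading, a system whose rules all resemble `predefined_rule`, even if its observations are revised as data accumulates, would not "generate rules underlying or based on raw data" within the adopted definition.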
For these reasons, we do not sustain the Examiner's rejection of claims 13-16 and 18-27 under 35 U.S.C. § 102(b) as anticipated by Sorenson.

Rejection of claim 17 under 35 U.S.C. § 103(a) as unpatentable over Sorenson and Intelligent Miner

In the rejection of claim 17, the Examiner relies on the deficient finding that Sorenson discloses machine learning as the basis for further modifying Sorenson with the disclosure of Intelligent Miner. Non-Final Act. 13-14. For the same reasons set forth above in the analysis of the anticipation rejection, we likewise do not sustain the rejection of claim 17 under 35 U.S.C. § 103(a) as unpatentable over Sorenson and Intelligent Miner.

DECISION

The decision to reject claims 13-27 under 35 U.S.C. § 101 is affirmed. The decision to reject claims 13-16 and 18-27 under 35 U.S.C. § 102(b) and claim 17 under 35 U.S.C. § 103(a) is reversed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED