UNITED STATES PATENT AND TRADEMARK OFFICE

Application No. 14/857,600, filed September 17, 2015. First Named Inventor: Seth Mercur Feder. Attorney Docket No. DC-104895 (20110-1934). Examiner: Mary Evangeline Barr, Art Unit 3626. Notification Date: June 18, 2020.

____________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________

Ex parte SETH MERCUR FEDER, CARRIE ELAINE GATES, and GABRIEL MAURICIO SILBERMAN
____________________

Appeal 2020-000278
Application 14/857,600
Technology Center 3600
____________________

Before JAMES P. CALVE, BENJAMIN D. M. WOOD, and LEE L. STEPINA, Administrative Patent Judges.

CALVE, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant1 appeals from the decision of the Examiner to reject claims 1–20. Appeal Br. 6. We have jurisdiction under 35 U.S.C. § 6(b).

We AFFIRM.

1 "Appellant" refers to "applicant" as defined in 37 C.F.R. § 1.42. Appellant identifies Dell Products, L.P. as the real party in interest. Appeal Br. 4.
CLAIMED SUBJECT MATTER

Claims 1, 8, and 15 are independent. Claim 1 is reproduced below.

1. A health risk prediction system for predicting a health risk of a user, the system comprising:
    a data storage that stores a health model that has been trained using context data collected from consumer product sensors made for a set of principally non-medical devices; and
    a personal predictive medical analytics system that:
        monitors one or more of the consumer product sensors;
        based on the health model and at least some data from the one or more of the consumer product sensors, determines whether an anomaly that is health-related is possible;
        in response to determining that the anomaly is possible, uses the health model to determine from which of the one or more consumer product sensors to obtain additional sensor values;
        uses the health model and the additional sensor values to infer a context that is indicative of whether the anomaly exists;
        in response to determining that the anomaly does not exist, returns to the step of monitoring one or more of the consumer product sensors; and
        based on determining that the anomaly is present, triggers an action.

REJECTIONS

Claims 1–20 are rejected under 35 U.S.C. § 112(a) for lack of written description.

Claims 7 and 14 are rejected under 35 U.S.C. § 112(b) for indefiniteness.

Claims 1–20 are rejected as directed to a judicial exception to 35 U.S.C. § 101.

Claims 1–11, 13, 14, and 18–20 are rejected under 35 U.S.C. § 103 as unpatentable over Syed (US 2012/0059779 A1, pub. Mar. 8, 2012) and Zhang (US 2015/0269824 A1, pub. Sept. 24, 2015).

Claims 12, 16, and 17 are rejected under 35 U.S.C. § 103 as unpatentable over Syed, Zhang, and Ricci (US 9,317,983 B2, iss. Apr. 19, 2016).

ANALYSIS

Claims 1–20 for Lack of Written Description

Appellant argues the claims as a group. Appeal Br. 9–12. We select claim 1 as the representative claim. See 37 C.F.R.
§ 41.37(c)(1)(iv).

Regarding claim 1, the Examiner determines that the Specification does not describe the limitation "use the health model to determine from which of the one or more consumer product sensors to obtain additional sensor values" sufficiently to reasonably convey to a skilled artisan that Appellant had possession of this claimed subject matter. Final Act. 2. In particular, the Examiner finds that the Specification lacks support for the concept of determining from which sensor to obtain values. Id. at 2–3. The Examiner further explains at page 3 of the Final Office Action:

The specification describes checking other sensors after determining there is an anomaly [0051] and in Figure 5, there is a step of check other sensors after it is determined that an anomaly exists and there is worry. However, this does not include choosing a sensor to check or obtain additional values from. It also does not describe how the model is used to determine the sensor. There is no disclosure of how one would determine one sensor to obtain additional values from as opposed to another sensor.

In response, Appellant argues that "the written description contains numerous examples that demonstrate how the claimed health risk prediction system may refer to a particular source for information, e.g., a GPS sensor, or 'other sources of information' [and] discloses 'what (available) sensors should be monitored and with what frequency' . . . 'to determine if the user has been regularly exercising and what foods the user has been eating.'" Appeal Br. 10 (citing Spec. ¶¶ 32, 34, 50, Fig. 5). Appellant asserts that the answer to the question of what sensors to monitor is to choose one or more of the consumer product sensors that perform the step of monitoring and that provide antecedent basis for determining from which of the one or more consumer product sensors to obtain additional values. See id. at 10–11.
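For reference in the discussion that follows, the control flow recited in claim 1 (monitor, detect a possible anomaly, determine from which sensors to obtain additional values, infer context, then act or resume monitoring) can be sketched as follows. This is a hypothetical illustration only: the HealthModel class, its threshold and follow-up tables, and the sensor names are invented for exposition and appear nowhere in the Specification or the prosecution record.

```python
# Hypothetical sketch only: none of these names, thresholds, or rules
# come from the application's Specification or the prosecution record.

class HealthModel:
    def __init__(self, thresholds, follow_up):
        self.thresholds = thresholds  # per-sensor normal-range limits
        self.follow_up = follow_up    # sensor -> extra sensors to consult

    def anomaly_possible(self, readings):
        # A reading above its limit makes a health-related anomaly "possible".
        return any(v > self.thresholds.get(k, float("inf"))
                   for k, v in readings.items())

    def select_sensors(self, readings):
        # The disputed limitation: the model determines from which of the
        # sensors to obtain additional sensor values.
        flagged = [k for k, v in readings.items()
                   if v > self.thresholds.get(k, float("inf"))]
        return [s for k in flagged for s in self.follow_up.get(k, [])]

    def infer_context(self, extra):
        # Example rule: a gym location "explains away" a high heart rate.
        return "exercising" if extra.get("gps") == "gym" else "unexplained"


def check_once(model, readings, sensors):
    """One pass of the claimed loop; returns an action or None (resume)."""
    if not model.anomaly_possible(readings):
        return None  # no possible anomaly: keep monitoring
    extra = {name: sensors[name]() for name in model.select_sensors(readings)}
    if model.infer_context(extra) == "unexplained":
        return "alert_user"  # "triggers an action"
    return None  # anomaly explained: return to monitoring


model = HealthModel(thresholds={"heart_rate": 120},
                    follow_up={"heart_rate": ["gps"]})
print(check_once(model, {"heart_rate": 150}, {"gps": lambda: "gym"}))     # None
print(check_once(model, {"heart_rate": 150}, {"gps": lambda: "office"}))  # alert_user
```

The written-description dispute centers on the `select_sensors` step: the parties agree that "other sensors" are checked after an anomaly is suspected, but the Specification does not say how the model chooses among them.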
Principles of Law

The Patent Laws require "[t]he specification shall contain a written description of the invention." 35 U.S.C. § 112(a). The written description must reasonably convey to a skilled artisan that the inventor had possession of the claimed subject matter as of the filing date. Ariad Pharms., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1351 (Fed. Cir. 2010) (citations omitted). The test requires an objective inquiry into the four corners of the specification from the perspective of a person of ordinary skill in the art. Id.

Although the written description requirement does not require claimed subject matter to be described identically (Ariad, 598 F.3d at 1352), "[t]he appearance of mere indistinct words in a specification or a claim, even an original claim, does not necessarily satisfy" its requirements, if it does not describe the claimed invention so a skilled artisan can recognize what is claimed (Enzo Biochem, Inc. v. Gen-Probe Inc., 323 F.3d 956, 968–69 (Fed. Cir. 2002)).

The Examiner has the better position. The issue is not whether the Specification adequately describes the limitation of checking sensors after an anomaly is determined. The Examiner and Appellant agree that this feature is adequately described. See Final Act. 2–3; Ans. 4; Appeal Br. 10. The issue is whether, after data from the consumer product sensors indicates that an anomaly is possible, the Specification describes "us[ing] the health model to determine from which of the one or more consumer product sensors to obtain additional sensor values" as claimed.

We determine that the Specification does not describe this claimed subject matter sufficiently to convey to a skilled artisan that Appellant possessed this subject matter at the time of the filing of the application. Specifically, it does not describe how the health model determines from which sensor to obtain additional values.
Paragraph 32 of the Specification describes refinements to the medical model that "may be possible using the outcome of the diagnosis phase above, in terms of what (available) sensors should be monitored and with what frequency, and any shortcuts (e.g., irrelevant sensors) available to arrive at a diagnostic [sic] faster." Thus, the model updates sensors that are monitored for ongoing operations (and the frequency of monitoring them) to determine if an anomaly is possible. This description does not address what happens after the system determines an anomaly is possible. Nor does it describe the model determining from which sensors to obtain additional sensor values "in response to determining that the anomaly is possible."

Paragraph 50 of the Specification describes this process as ongoing operations gathering input from a variety of sensors and aggregating the inputs to compare against the personal model, which can create additional options for normative behavior based on deviations that are detected.

Claim 1 requires additional sensor values to be obtained in response to the personal predictive medical analytics system determining an anomaly is possible based on "at least some [of this] data from the one or more of the consumer product sensors." After determining an anomaly is possible based on data from a consumer product sensor(s), the system "uses the health model to determine from which of the one or more consumer product sensors to obtain additional sensor values." Appeal Br. 27 (Claims App.). The Specification does not describe the personal predictive medical analytics system using the health model "to determine from which of the one or more consumer product sensors to obtain additional sensor values" when an anomaly is possible. Even if this feature is obvious from the descriptions cited by Appellant, that fact does not yield an adequate written description.
The question is not whether a claimed invention is an obvious variant of that which is disclosed in the specification. Rather, a prior application itself must describe an invention, and do so in sufficient detail that one skilled in the art can clearly conclude that the inventor invented the claimed invention as of the filing date sought. Lockwood v. Am. Airlines, Inc., 107 F.3d 1565, 1571–72 (Fed. Cir. 1997) ("It is the disclosures of the applications that count. Entitlement to a filing date does not extend to subject matter which is not disclosed, but would be obvious over what is expressly disclosed."); see Ariad, 598 F.3d at 1352.

If anything, the Specification indicates that the personal predictive medical analytics system obtains additional sensor values from sensors other than the consumer product sensors that are monitored to determine that an anomaly is possible in the first place. The system does not obtain additional values from the same sensors as Appellant argues. See Appeal Br. 10–11.

Appellant's Figure 5 is reproduced below to illustrate this disclosure. Appellant's Figure 5 depicts a flow chart to collect sensor inputs. Spec. ¶ 48. During ongoing operation, data is gathered from sensors 210 on smartphones, laptops, touch screens, mice, and cameras. Id. ¶ 49. When deviations from the personal model of the user are detected, process 500 determines if an anomaly exists 520 by comparing the data to the personal model. Id. ¶ 51. "If there is a cause for worry, then process 500 checks other sensors 535 and asks again about worry 540." Id. (emphasis added).

Examples in the Specification also check sensors other than sensors that are monitored to determine if an anomaly is possible in the first place.
If a consumer product sensor on a cell phone senses increases in heart rate, "system 200 can refer to other sources of information to determine if the user is at the gym or has scheduled a run with a friend." Id. ¶ 33 (emphasis added). If the "secondary sources" of information do not rule out a medical issue, "the user can be [asked] whether or not the user is exercising." Id.

In another example, "if a sensor senses that a user has gained body weight over a period of time," "system 200 can refer to other sources of information to determine if the user has been regularly exercising and what foods the user has been eating." Id. ¶ 34 (emphasis added). "If the secondary sources do not rule out unhealthy weight gain, then the user can be prompted with a question about whether or not the recent changes in weight are due to muscle or fat gain." Id. (emphasis added).

Similarly, personal predictive medical analytics system 215 receives input from available sensors (cloud 230) to detect problems. Id. ¶ 24. If a reading is outside the normal range, "the system 200 can make an attempt to ascertain whether it is a true symptom or caused by other, non-pathological factors . . . by gathering readings from available complementary sources (mobile devices, fixed cameras, ambient sensors). Sources include other personal (GPS in a phone . . .) or environmental (weather . . .) inputs." Id. (emphasis added).

Thus, different sensor input is obtained in response to determining an anomaly is possible. However, there is no description that the health model determines from which sensor(s) to obtain these values. Thus, we sustain the rejection of claims 1–20 for lack of written description.

Claims 7 and 14

The Examiner determines that the Specification does not describe a personal predictive medical analytics system that "generates a predictive health risk profile" as claimed. Final Act. 3.
The Specification describes process 400 determining a context beginning with a generic profile for the "norm" of a user based on age, gender, height, ethnicity, and geographic location. Id. ¶ 45. The system also can be trained by asking the user about various factors. Id. A personal model can be used as a filter to distinguish a user from a generic population profile as determined by statistical modeling. Id. ¶ 53. However, Appellant does not address the Examiner's rejection of claims 7 and 14 for lack of a written description.2 See Hyatt v. Dudas, 492 F.3d 1365, 1377 (Fed. Cir. 2007) (holding that when a written description cannot be found in the specification, as filed, the only thing the PTO can reasonably be expected to do is to point out its non-existence). Thus, we summarily sustain the rejection of claims 7 and 14 for lack of written description. See 37 C.F.R. § 41.37(c)(1)(iv).

We note Appellant's arguments that the Examiner failed to establish a lack of enablement of claims 1–20 because the disclosure allows a skilled artisan to make and use the invention without undue experimentation. See Appeal Br. 12–15; Reply Br. 5–6. However, we find no such rejection in the Final Office Action from which appeal is taken, and the Examiner confirms in the Answer that no such rejection was made or is pending. See Ans. 4. Therefore, we do not address these arguments by Appellant.

2 In the Reply Brief, Appellant asserts that claims 7 and 14 were cancelled to address the rejection of those claims under 35 U.S.C. § 112(b). Reply Br. 7.

Claims 7 and 14 for Indefiniteness

The Examiner determines that claims 7 and 14 are indefinite because the meaning of "predictive health risk profile" is unclear in light of the Specification. Final Act. 4. The Examiner also finds that the Specification describes a health risk profile (Spec. ¶¶ 17, 18) and a generic user profile (id.
¶ 45), but it is unclear which of these profiles is considered to be a predictive health risk profile as claimed. Id.

Appellant does not respond to this rejection. Appeal Br. 9–26. In the Reply Brief, Appellant indicates that claims 7 and 14 were cancelled to moot this rejection. Reply Br. 7. We find no record of this cancellation.3 Thus, we summarily sustain this rejection. See 37 C.F.R. § 41.37(c)(1)(iv).

Patent Eligibility of Claims 1–20

Principles of Law

Section 101 of the Patent Act states:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

35 U.S.C. § 101. This provision contains an implicit exception: "Laws of nature, natural phenomena, and abstract ideas are not patentable." Alice Corp. Pty. v. CLS Bank Int'l, 573 U.S. 208, 216 (2014).

3 In Appellant's Response to Final Office Action, Appellant proposed an amendment to claims 7 and 14 to delete language that the system "generates a predictive health risk profile" and instead recite that the system "based on a result of the comparison, predicts a health risk." Response to Final Office Action, filed Dec. 19, 2018, at 3 (claim 7), 5 (claim 14). The Examiner did not enter that amendment. Adv. Action 1 (mailed Jan. 18, 2019). Therefore, claims 7 and 14 still include the language determined to be indefinite by the Examiner.

To distinguish patents that claim laws of nature, natural phenomena, and abstract ideas from those that claim patent-eligible applications, we first determine whether the claims are directed to a patent-ineligible concept. Id. at 217.
If they are, we consider the elements of each claim, individually and "as an ordered combination," to determine if additional elements "'transform the nature of the claim' into a patent-eligible application" as an "inventive concept" sufficient to ensure the claims in practice amount to significantly more than a patent on the ineligible concept itself. See id. at 217–18.

The USPTO has issued guidance about this framework. 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (Jan. 7, 2019) ("Revised Guidance"). Under the Revised Guidance, to determine whether a claim is "directed to" an abstract idea, we evaluate whether the claim recites (1) any judicial exceptions, including certain groupings of abstract ideas listed in the Revised Guidance (i.e., mathematical concepts, certain methods of organizing human activities such as a fundamental economic practice, or mental processes); and (2) additional elements that integrate the judicial exception into a practical application (see MPEP §§ 2106.05(a)–(c), (e)–(h) (9th ed. rev. 08.2017 Jan. 2018) ("MPEP")). Id. at 52–55.

Only if a claim (1) recites a judicial exception and also (2) does not integrate that exception into a practical application, do we then consider whether the claim (3) adds a specific limitation beyond the judicial exception that is not "well-understood, routine, conventional" in the field (see MPEP § 2106.05(d)) or (4) simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. Id. at 56.

Examiner's Determination

The Examiner determines that the claims recite the abstract idea of certain methods of organizing human activity, namely managing personal behavior by following rules and instructions, including a health model and sensor values used to determine if an anomaly is possible. Ans. 6.
The Examiner determines that the claims recite generic computer components recited at a high level of generality that do not impose a meaningful limit on the judicial exception or integrate it into a practical application. Id. at 6–7. The Examiner determines that the claims recite consumer product sensors at a high level of generality without reciting actual collection of data or the use of the sensors in any particular manner. Id. at 8. The Examiner determines that the sensors perform the well-understood, routine, and conventional activity of gathering data that is used in further analysis to determine if an anomaly exists, without providing unconventional sensors or unconventional usage of conventional sensors. Id. at 8–9.

Appellant's Contentions

Appellant argues that the claims recite more than data collection and analysis because they recite determining a sensor to obtain additional sensor values, modifying a health model based on learned behavior, and generating predictive health risk profiles. Appeal Br. 16–17. Appellant also argues that the claims involve more than managing personal behavior through rules or instructions because the health model is trained using context data collected from consumer product sensors, and rules and instructions cannot be trained. See Reply Br. 8. Appellant further argues that the claimed use of sensors is similar to claims found to be patent eligible in Thales Visionix Inc. v. United States, 850 F.3d 1343 (Fed. Cir. 2017). Appeal Br. 17–21; Reply Br. 10–11.

Appellant argues the claims as a group, for which we select claim 1 as representative. 37 C.F.R. § 41.37(c)(1)(iv). We also address Appellant's argument that the Examiner did not evaluate the eligibility of dependent claims 2–7, 9–14, and 16–20 properly. See Appeal Br. 22; Reply Br. 12–13.

Step 1: Is Claim 1 Within a Statutory Category?

Claim 1 recites a "health risk prediction system," which is within a statutory category of 35 U.S.C.
§ 101, namely, an apparatus. Therefore, we next consider whether claim 1 as a whole recites a judicial exception.

Step 2A, Prong 1: Does Claim 1 Recite a Judicial Exception?

We determine that claim 1 recites an abstract idea. The Revised Guidance enumerates this abstract idea as (1) certain methods of organizing human activity, namely managing personal behavior by following rules or instructions, and (2) mental processes, namely concepts performed in the human mind. See Revised Guidance, 84 Fed. Reg. at 52.

The claims relate to using sensor inputs from non-medical devices to predict a health risk of a user. Spec. ¶ 1. The claimed system collects data from consumer product sensors regarding activities of a person/user and uses the data to train a health model, thereby organizing those human activities into a health model used to manage personal behavior as discussed in more detail below. Claim 1 does not recite how the model is trained, however.

The claimed "data storage stores a health model that has been trained using context data collected from consumer product sensors made for a set of principally non-medical devices." Appeal Br. 27 (Claims App.). This limitation receives sensed user activities as "context data" and organizes the data into a trained health model that is used to manage personal behavior by following rules or instructions embodied in the health model.

"Context data" covers user activities used to train the health model, but the manner of training is not claimed. The Specification teaches that:

Elements that define context can include: posture, circumstance, activity, and location. Posture can include determination of whether the user is sitting, standing still, walking, lying down, etc. Circumstance can include determination of whether the user is stationary or on a conveyance such as a car, bus, elevator, plane, etc.
Activity can include determination of whether a user is engaged in strenuous physical activity such as biking, jogging, hiking, etc. Location can include a determination of where the user is, for example, at home, at work, at a public event, in an accident, etc.

Spec. ¶ 44 (emphasis added).

Consumer product sensors collect this activity in a variety of ways. "[A] computer or cell phone can collect information related to the posture, keystrokes, swipe strokes, look, gaze, position, and other information related to a user." Id. ¶ 20. Other consumer devices such as cameras, cars, watches, exercise equipment, furniture, and computer systems can collect information about a user's heart rate, calorie consumption, orientation, location, speed, exertion level, gaze, hand strength, posture, and the like. Id.

Claim 1 then recites that the trained health model is used as part of "a personal predictive medical analytics system" that "monitors one or more of the consumer product sensors" and, "based on the health model and at least some data from the one or more of the consumer product sensors, determines whether an anomaly that is health-related is possible." Appeal Br. 27 (Claims App.). The system and health model provide rules to analyze data received from consumer product sensors to determine whether a person has a health-related anomaly requiring further action, e.g., by comparing the data to the user's personal health model. See Spec. ¶¶ 29, 30, 33, 34, 51.

The system is designed to predict health risks without the need for a patient to go to a medical doctor and undergo testing with expensive medical equipment. Spec. ¶¶ 2, 3. The system uses the health model and sensed data to predict (i.e., diagnose) whether the user has a health-related anomaly. An "anomaly" occurs when a sensed value from a consumer product sensor deviates from the user's health model, e.g., by exceeding a threshold.
"Whenever a deviation from the personal [health] model is detected, which exceeds a given threshold[,] the system translates the anomaly in sensor readings to a possible set of pathological symptoms." Id. ¶ 53. This limitation involves a mental process that can be performed in the human mind by simply comparing a sensed value to a threshold in the health model to determine if the sensed value exceeds the threshold or not. See Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) ("[W]e have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category.").

Similar to claim 1, the claim in Electric Power involved receiving sensed data measurements and analyzing the data based on "limits, sensitivities and rates of change for one or more measurements." Id. at 1351–52 (citing claim 12 of the '710 patent). So too, in Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367 (Fed. Cir. 2015), a method of storing a user identity profile in a database and then "tracking financial transactions to determine whether they exceed a pre-set spending limit" was determined to involve "methods of organizing human activity."

Here, determining whether a health-related anomaly is possible involves a similar concept of comparing sensed values to thresholds of a user health model/profile to see if thresholds are exceeded. This feature filters sensed data that is collected, which is an abstract idea. Bascom Global Internet Svcs., Inc. v. AT&T Mobility LLC, 827 F.3d 1341, 1348 (Fed. Cir. 2016) ("[F]iltering content is an abstract idea because it is a longstanding, well-known method of organizing human behavior.").
Next, the claimed system "uses the health model to determine from which of the one or more consumer product sensors to obtain additional sensor values" "in response to determining that the anomaly is possible." Then, the system "uses the health model and the additional sensor values to infer a context that is indicative of whether the anomaly exists." Appeal Br. 27 (Claims App.). The context in which a possible anomaly is detected is important to predicting a health or medical risk accurately. See Spec. ¶ 43. These limitations recite the same abstract idea of organizing human activity and mental processes. The Specification describes this concept as:

When a reading falls outside its normal range, the system 200 can make an attempt to ascertain whether it is a true symptom or caused by other, non-pathological factors. This attempt can be made by the system 200 by gathering readings from as many complementary sources (mobile devices, fixed cameras, ambient sensors) as they are available 210. These sources include other personal (GPS in a phone, health record, previously recorded data) or environmental (weather, pollution or other information available from online services or news/RSS feeds) inputs. . . . If none of the available readings can explain away the symptom, then all the information thus gathered (initial reading and complementary information) can feed into a diagnosis phase to determine a response 225.

Spec. ¶ 24 (emphasis added).

As discussed above for the written description rejection, the Specification does not describe how the health model is used to determine from which consumer product sensor(s) to obtain "additional sensor values." Readings from some other sensors are obtained somehow. These limitations replicate mental processes of a doctor examining a patient. Spec. ¶¶ 1–3.
Predictive analytics of medical diagnostic databases like WebMD are used with the health model to filter and distinguish a user from generic profiles. Id. ¶ 53. Similar claims to obtaining patient data and using knowledge bases of different treatment regimens, with expert rules for evaluating and selecting the treatment regimens and advisory information to treat a patient, involve mental steps that people can and regularly do perform in their heads. See SmartGene, Inc. v. Advanced Biological Labs., SA, 555 F. App'x 950, 954–55 (Fed. Cir. 2014); see also In re Meyer, 688 F.2d 789, 795–96 (CCPA 1982) (holding claims reciting an algorithm that represents a mental process that a neurologist should follow recite an abstract idea).

Just as doctors ask questions or perform tests to obtain additional data about a possible anomaly that is detected in a patient, the claimed system obtains additional data from consumer product sensors to infer a context that helps the system predict a health risk more accurately. Spec. ¶ 43. Because the sensor data can be collected when a person is asleep, at work, exercising, walking, reading, sitting, standing, or running, an accurate determination of the context of the sensed data is important to predicting a problem such as a fall, car accident, heart attack, or stroke. Id.

Thus, if a sensor senses an increase in a user's heart rate, system 200 can refer to other sources of information (a GPS signal from a cell phone or a calendar entry) to determine if the user is at the gym or running. Id. ¶ 33. If system 200 determines a user is at the gym, then the sudden increase can be explained: the user likely is exercising. Id. If a medical issue is not ruled out, the user is questioned about current activity. Id. A user's response (or no response) can determine whether to enter the diagnostic phase or not. Id.
If a sensor senses that a user has gained body weight over a period of time, system 200 can refer to other sources of information to determine if the user has been exercising regularly and what foods have been eaten. Id. ¶ 34. System 200 can obtain information from a GPS signal of a user's cell phone, a user's calendar entries, or user input from sensor 210 to assess whether the user was performing resistance training at the gym and has a caloric intake in line with recommended limits. Id. If so, the change in weight has a possible explanation. Id. If secondary sources do not rule out unhealthy weight gain, the user can be asked if the weight gain is due to muscle or fat gain. Id. If the user responds that the weight gain is muscle mass, system 200 has an explanation other than a health issue for the sudden change in weight. Id. If the user indicates the weight gain is fat, system 200 enters a diagnostic phase to determine a response such as changing diet or exercise regimen. Id. ¶ 35.

The system collects data from consumer product sensors about users' activities, and the health model analyzes and organizes that data using rules that reflect clinical best practices to identify pathologies. Id. ¶¶ 29, 30, 39. The system can solicit user input about a possible anomaly or suggest action to take for an anomaly. Id. ¶¶ 48–56. Such activity involves mental steps.

If an anomaly does not exist, the system returns to monitoring the sensors. If an anomaly exists, the system triggers an action. See Appeal Br. 27 (Claims App.). If the system detects a deviation that exceeds a threshold and maps to a pathology, the system sends a message to the user 610 asking questions about their current/recent activity. Id. ¶¶ 53–54, Fig. 6. If a user response indicates a possible condition or there is no response, the system may notify the user with text or voice messages. Id. ¶ 54.
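The weight-gain sequence the Specification describes (consult secondary sources; if they do not rule out unhealthy gain, ask the user; if the answer is fat gain or no answer is given, enter the diagnostic phase) can be sketched as a simple decision function. This is an illustrative sketch only; the field names, helper name, and return values are invented and are not drawn from the Specification or the record.

```python
# Hypothetical sketch of the "explain away" sequence for the weight-gain
# example; field names and outcomes are invented for illustration.

def resolve_weight_gain(secondary, user_answer=None):
    """Resolve a detected weight-gain anomaly per the described sequence."""
    # Step 1: secondary sources (e.g., GPS, calendar entries, user input).
    if secondary.get("resistance_training") and secondary.get("calories_ok"):
        return "explained"           # plausible non-pathological cause
    # Step 2: sources do not rule out unhealthy gain, so ask the user.
    if user_answer == "muscle":
        return "explained"           # muscle mass explains the change
    # Step 3: fat gain (or no answer) enters the diagnostic phase.
    return "diagnostic_phase"        # e.g., suggest diet/exercise changes

print(resolve_weight_gain({"resistance_training": True, "calories_ok": True}))
print(resolve_weight_gain({}, user_answer="fat"))  # diagnostic_phase
```

The point the Board draws from this sequence is that each branch is a rule a person could apply mentally; nothing in the claim ties the branching to particular sensors or structures.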
These limitations recite extra-solution activity and manage behavior by rules and instructions. The holding in Thales Visionix v. United States illustrates why claim 1 in this appeal recites an abstract idea that is not patent eligible. In Thales, the claim recited a system for tracking the motion of an object relative to a moving reference frame. Thales Visionix Inc. v. United States, 850 F.3d 1343, 1345 (Fed. Cir. 2017). In Thales, the claim recited “a first inertial sensor mounted on the tracked object,” “a second inertial sensor mounted on the moving reference frame,” and “an element adapted to receive signals from said first and second inertial sensors and configured to determine an orientation of the object relative to the moving reference frame based on the signals received from the first and second inertial sensors.” Id. The court held that the claims were directed to systems and methods that use inertial sensors in a non-conventional manner to reduce errors in measuring the relative position and orientation of a moving object on a moving reference frame. Id. at 1348–49. The patent specification described this use of the sensors as mitigating errors by eliminating calculations of inertia relative to the earth to allow the system to work with any type of moving platform. Id. at 1348. In contrast to the claim found to be patent-eligible in Thales, claim 1 here does not claim sensors or sensor arrangements. The system “monitors” consumer product sensors and, based on data from the sensors, determines if an anomaly exists. Appeal Br. 27 (Claims App.). Claim 1 does not recite a particular type or arrangement of sensors or structures. It merely receives data from consumer product sensors, which are described as generic sensors available on consumer devices. Spec. ¶¶ 23, 24, 27, 33.
Similar to the claims in Electric Power, claim 1 here collects data from generic sensors and processes it in ways that involve mental processes. Nor does claim 1 here recite steps of training a health model using context data. Claim 1 simply recites that the health model “has been trained using context data collected from consumer product sensors” without any details of that process or other indication that it advances computers or other technology. See Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1317–18 (Fed. Cir. 2016) (holding claims to a database of business rules applied to email messages to determine a set of actions to be applied to the email message to control delivery of the email message recited an abstract idea because “with the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper.”); Revised Guidance, 84 Fed. Reg. at 52 n.14; MPEP § 2106.04(a)(2). Eligibility of Dependent Claims The Examiner determined that the dependent claims recite the same abstract idea. Final Act. 5–6. Claims 3 and 10 “inquire[] further regarding the health risk.” Claim 4 “activates a response.” Claims 6 and 13 “modif[y] the health model based on learned behavior.” Claims 7 and 14 compare data of the health model and modified health model and “generate[] a predictive health risk profile.” Claim 18 “activat[es] an emergency response based on user input.” Claim 20 updates the health model “based on user behavior.” See id. Appellant’s mere assertion that the Examiner did not examine these claims (Appeal Br. 22; Reply Br. 12–13) is not persuasive of error. See In re Jung, 637 F.3d 1356, 1365 (Fed. Cir. 2011); 37 C.F.R. § 41.37(c)(1)(iv). Accordingly, we determine that claims 1–20 recite certain methods of organizing human activity and mental processes identified above. Revised Guidance, 84 Fed. Reg.
at 52; MPEP § 2106.04(a)(2).

Step 2A, Prong Two: Integration into a Practical Application

We next consider whether claim 1 recites additional elements that integrate the abstract idea into a practical application. Revised Guidance, 84 Fed. Reg. at 54 (Revised Step 2A, Prong Two). As discussed in more detail below, we determine the claims do not recite any additional elements that are sufficient to integrate the abstract idea recited in claim 1 into a practical application. Any additional elements recited in claim 1 do not improve a computer or other technology. They do not implement the abstract idea in conjunction with a particular machine or manufacture that is integral to the claim. They do not transform or reduce a particular article to a different state or thing. They do not apply the abstract idea in a meaningful way beyond linking it to a particular environment of the abstract idea. See Revised Guidance, 84 Fed. Reg. at 55, and MPEP sections cited therein. Appellant’s arguments that Thales supports a determination of patent eligibility are not persuasive. See Appeal Br. 17–20; Reply Br. 10–11. The claims in Thales are distinguishable from claim 1 here because the claims in Thales positively recited systems and methods that “use inertial sensors in a non-conventional manner to reduce errors in measuring the relative position and orientation of a moving object on a moving reference frame.” Thales, 850 F.3d at 1348–49. One sensor was mounted on the tracked object. The other sensor was mounted on the moving reference frame. Id. at 1348. Although the claims used mathematical equations to determine the orientation of the object relative to the moving reference frame, that concept was implemented with a particular configuration of inertial sensors that was integral to the claims to determine an orientation of a moving object on a moving reference frame. Id. at 1348–49.
The court explained:

We hold that the ’159 patent claims at issue in this appeal are not directed to an abstract idea. The claims specify a particular configuration of inertial sensors and a particular method of using the raw data from the sensors in order to more accurately calculate the position and orientation of an object on a moving platform.

Id. at 1349 (emphasis added). In contrast to the claims in Thales, here, claim 1 recites no particular configuration of consumer product sensors that is integral to the claim or that ties the abstract idea recited in claim 1 to a particular machine. Indeed, unlike the claims in Thales, claim 1 here does not even recite sensors as a limitation. Claim 1 instead “monitors one or more of the consumer product sensors” and “uses the health model to determine from which of the one or more consumer product sensors to obtain additional sensor values.” Appeal Br. 27 (Claims App.) (emphasis added). Claim 1 collects data from one or more consumer product sensors in an unspecified manner. The sensors are recited generically in no particular arrangement as part of generic consumer products. The “personal predictive medical analytics system” “monitors one or more of these sensors” and uses data from the sensor(s) in an unspecified way to determine if an anomaly is possible. These mental steps do not improve sensors or computers. The Specification describes consumer product sensors generically as high accuracy GPS and accelerometers of a tablet, smartphone, or camera. Spec. ¶¶ 23, 26, 27, 47, Fig. 2 (sensors 210, 235, 240). The Specification describes consumer product sensors as everyday devices such as phones, cars, office equipment, computers, exercise equipment, tablets, televisions, and cameras that can sense information about a user. Id. ¶ 27.
Claim 1 does not recite a configuration of consumer product sensors, which simply collect data about a user and thus are not integral to claim 1. The Specification describes the consumer products and sensors generically:

Consumer products 105 can be any consumer products that can collect information about a user. Examples of consumer products include smart phones, computers, cameras, cars, handheld electronic devices, tablets, watches, exercise equipment, furniture, appliances, and other devices that have the ability to sense information about a user. For example, a computer or cell phone can collect information related to the posture, keystrokes, swipe strokes, look, gaze, position, and other information related to a user. Other consumer products can also collect information about a user, for example, cameras, cars, watches, exercise equipment, furniture, computer systems, etc. These devices can collect information about a user’s heart rate, calorie consumption, orientation, location, speed, exertion level, gaze, hand strength, posture, etc. Information can be continuously collected from these consumer products 105.

Spec. ¶ 20. Nor is data from consumer product sensors tied to a concrete, tangible structure or element as in Thales. Instead, data is collected and analyzed largely using mental processes as identified under Prong One. “Information as such is an intangible” and collecting, analyzing, and displaying that information, without more, is an abstract idea. See Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 1344–45 (Fed. Cir. 2018) (quoting Elec.
Power Grp., 830 F.3d at 1353–54 and citing similar decisions holding that displaying different types of information from various sources on a generic display is abstract absent a specific improvement to the way computers operate); In re TLI Commc’ns LLC Patent Litig., 823 F.3d 607, 613 (Fed. Cir. 2016) (“It is well-settled that mere recitation of concrete, tangible components is insufficient to confer patent eligibility to an otherwise abstract idea.”). The “data storage,” “health model,” and “personal predictive medical analytics system” also are recited as generic elements in claim 1. They are described in the Specification generically as well. For example, collected information can be stored in “data aggregation and storage 115,” which is depicted as a block in the block diagram of the health risk prediction system. Id. ¶¶ 19, 21, Fig. 1. Data stored in the generic data aggregation and storage 115 can be used with a generic health model 125. The health model can be derived from advanced analytics and decision optimization technology. Id. ¶ 23. “Advanced analytics such as statistics, data mining, and text mining can evaluate the available data inputs with bias towards those known to contribute to certain conditions” and “[t]he model can consider data inputs available for a particular user.” Id. The “personal predictive medical analytics system 215” can be used by the health prediction system 200 to receive input from available sensors 230. Id. ¶ 24, Fig. 2. Sensor 210 can take user data or user information and feed it into personal predictive medical analytics system 215. Id. ¶ 26.
The personal predictive medical analytics system 215 takes that information and predicts a possible health issue by starting with a known health model based on the general population and constructing an individualized model using advanced analytics such as k-Nearest Neighbor, Recursive Partitioning, Neural Networks, Self-organizing feature maps, or Model ensembles that combine two or more models together. Id. ¶ 28. These elements are claimed and described as generic elements. The sensors collect information about a user. Data storage stores a health model that has been trained using context data in an unspecified way. The system monitors the sensors and performs the abstract idea as identified above. Personal predictive medical system 300 includes processor 320, which can be any processor that functions to execute health model 325 or execute advanced analytics engine 340 for statistics, data mining, and text mining. Id. ¶ 39. The processor can be one found in servers in cloud environments, virtual machine servers, or personal computing environments. Id. Appellant also argues that the personal predictive medical analytics system uses new, useful techniques to access non-medical sensor data from non-medical devices and uses the data in a non-conventional manner to train a health model and generate a health risk profile useful for diagnosis and other health-related purposes. Appeal Br. 21. If new techniques are used to access or process sensor data, they are not claimed. The techniques recited in claim 1 are recited generically as an abstract idea identified in Prong One. “It has been clear since Alice that a claimed invention’s use of the ineligible concept to which it is directed cannot supply the inventive concept that renders the invention ‘significantly more’ than that ineligible concept.” BSG Tech LLC v. Buyseasons, Inc., 899 F.3d 1281, 1290 (Fed. Cir. 2018); see id.
at 1291 (“As a matter of law, narrowing or reformulating an abstract idea does not add ‘significantly more’ to it.”); see also RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327 (Fed. Cir. 2017) (“Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract.”); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) (“But, a claim for a new abstract idea is still an abstract idea.”); Versata Dev. Grp., Inc. v. SAP Am., Inc., 793 F.3d 1306, 1335 (Fed. Cir. 2015) (holding claims that improved an abstract idea but did not recite the supposed computer improvements were not patent eligible); see also Revised Guidance, 84 Fed. Reg. at 55 n.24. In short, these elements provide a generic environment in which to execute the abstract idea. They are used as tools to implement the abstract idea without improving computers or other technology. If there is a new or useful technique described in the Specification that advances technology of sensors, health models, networks, or computers, the advance is not recited in claim 1 and therefore cannot form a basis for patent-eligibility. See Ericsson Inc. v. TCL Commc’n Tech. Holdings Ltd., 955 F.3d 1317, 1325 (Fed. Cir. 2020) (holding that the specification must always yield to the claim language when identifying the true focus of a claim); see also Two-Way Media Ltd. v. Comcast Cable Commc’ns, LLC, 874 F.3d 1329, 1340 (Fed. Cir. 2017) (“As with claim 1 of the ’187 patent, the problem is that no inventive concept resides in the claims.”). Even the recitation of concrete, tangible components is not sufficient to make abstract ideas performed on or with the components patent-eligible. Alice, 573 U.S. at 223 (“[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.”).
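For illustration only, one of the individualizing techniques the Specification lists (Spec. ¶ 28), k-Nearest Neighbor, might be sketched as follows. The feature choices, population data, and labels are entirely hypothetical and are not drawn from the Specification; the sketch only shows the general technique of labeling a user's readings by comparison with labeled population samples.

```python
# Hypothetical sketch of k-nearest-neighbor classification: a user's
# readings are labeled by majority vote among the k closest population samples.

def knn_predict(samples, labels, query, k=3):
    """Label `query` by majority vote among the k nearest population samples."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))  # squared distance
    nearest = sorted(range(len(samples)), key=lambda i: dist(samples[i], query))[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Assumed features: (resting heart rate, systolic blood pressure).
population = [(60, 115), (62, 118), (64, 120), (90, 150), (95, 155), (92, 148)]
labels     = ["normal", "normal", "normal", "at-risk", "at-risk", "at-risk"]
assert knn_predict(population, labels, (61, 117)) == "normal"
assert knn_predict(population, labels, (93, 152)) == "at-risk"
```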
Here, the focus of claim 1 is not on technological advances in sensors, health models, or computers. Rather, the focus is on performing an abstract idea using computers as a tool. See Enfish, 822 F.3d at 1335–36; Bancorp, 687 F.3d at 1278 (“[T]he use of a computer in an otherwise patent-ineligible process for no more than its most basic function––making calculations or computations––fails to circumvent the prohibition against patenting abstract ideas and mental processes.”). Furthermore, the claimed data storage of a health model trained using context data collected from consumer product sensors and triggering an action if an anomaly is present recites insignificant extra-solution activity. It is well-settled that mere data gathering and other extra-solution activity does not integrate an abstract idea into a practical application. See Revised Guidance, 84 Fed. Reg. at 55 & n.33; MPEP § 2106.05(g); Elec. Power, 830 F.3d at 1355 (“But merely selecting information, by content or source, for collection, analysis, and display does nothing significant to differentiate a process from ordinary mental processes.”). The claimed personal predictive medical analytics system and health model attempt to replicate methods used by doctors and medical personnel to examine a patient (i.e., collect data) and diagnose a risk without the need to visit the doctor or undergo expensive medical testing. See Spec. ¶¶ 1–3, 29 (rules for clinical best practices), 30, 39 (rules for clinical best practices). Our reviewing court held similar claims to integrating physiological treatment data received from a machine, converting it to a specific format, analyzing it, and presenting results recited an abstract idea. Univ. of Florida Research Found., Inc. v. General Electric Co., 916 F.3d 1363, 1366–67 (Fed. Cir. 2019).
The court characterized the claim as “a quintessential ‘do it on a computer’ patent: it acknowledges that data from bedside machines was previously collected, analyzed, manipulated, and displayed manually, and it simply proposes doing so with a computer.” Id. at 1367. Appellant’s contention that claim 1 uses new and useful techniques to access non-medical sensor data from non-medical devices and use the data in a non-conventional manner in a trained health model is not persuasive because claim 1 does not recite any such new or useful techniques in the way data is used or computers operate. See id. (“The ’251 patent nowhere identifies, and we cannot see in the claims, any ‘specific improvement to the way computers operate.’”) (citation omitted). These limitations recite abstract ideas not technological advances. See Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1371 (Fed. Cir. 2015) (“Requiring the use of a ‘software’ ‘brain’ ‘tasked with tailoring information and providing it to the user’ provides no additional limitation beyond applying an abstract idea, restricted to the Internet, on a generic computer.”); Bancorp Servs., LLC v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266, 1278 (Fed. Cir. 2012) (“To salvage an otherwise patent-ineligible process, a computer must be integral to the claimed invention, facilitating the process in a way that a person making calculations or computations could not.”); see also Univ. of Florida, 916 F.3d at 1368 (“The ’251 patent ‘fails to provide any technical details for the tangible components, . . . instead predominantly describ[ing] the system and methods in purely functional terms.’”) (citation omitted). Accordingly, we determine that claim 1 does not include additional elements that integrate the abstract idea into a practical application.

Step 2B: Does Claim 1 Include an Inventive Concept?
We next consider whether claim 1 recites elements, individually, or as an ordered combination, that provide an inventive concept. Alice, 573 U.S. at 217–18. The second step of the Alice test is satisfied when the claim limitations involve more than performance of well-understood, routine, and conventional activities previously known to the industry. Berkheimer v. HP Inc., 881 F.3d 1360, 1367 (Fed. Cir. 2018); see Revised Guidance, 84 Fed. Reg. at 56 (explaining that the second step of the Alice analysis considers whether a claim adds a specific limitation beyond a judicial exception that is not “well-understood, routine, conventional” activity in the field). Appellant’s principal argument is that the claims use sensors in a non-conventional manner as in Thales and therefore are directed to a new and useful technique. Appeal Br. 21; Reply Br. 11–12. Appellant also argues that a skilled artisan would understand that the claim involves more than well-understood, routine, and conventional steps. Appeal Br. 21. As discussed above, claim 1 does not recite sensors or an arrangement of sensors. Instead, claim 1 monitors one or more consumer product sensors and based on the health model and at least some data from the one or more consumer product sensors determines whether an anomaly is possible. Thus, this argument is not commensurate with the scope of claim 1. Individually, the limitations of claim 1 recite aspects of the abstract idea identified above or insignificant extra-solution activity discussed above. As an ordered combination, the limitations add nothing that is not the sum of the individual parts. Even if the features are groundbreaking, innovative, or brilliant, that is not enough for patent eligibility. See Ass’n for Molecular Pathology v. Myriad Genetics, Inc., 569 U.S. 576, 591 (2013); accord SAP Am., Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163 (Fed. Cir.
2018) (“No matter how much of an advance in the finance field the claims recite, the advance lies entirely in the realm of abstract ideas, with no plausibly alleged innovation in the non-abstract application realm. An advance of that nature is ineligible for patenting.”). Accordingly, we determine that claim 1 does not recite any elements, individually or as an ordered combination, that provide an inventive concept sufficient to transform the abstract idea into patent eligible subject matter. Thus, we sustain the rejection of claim 1 and claims 2–20, which fall with claim 1, as directed to a judicial exception under 35 U.S.C. § 101.

Claims 1–11, 13, 14, and 18–20 Rejected over Syed and Zhang

Appellant argues the claims as a group. See Appeal Br. 22–26. We select claim 1 as representative. See 37 C.F.R. § 41.37(c)(1)(iv). Regarding claim 1, the Examiner finds that Syed teaches the system as claimed except for training the health model with context data, collecting data from non-medical devices, and, in response to determining an anomaly is possible, using the health model to determine from which one of the consumer product sensors to obtain additional sensor values. The Examiner relies on Zhang to teach these features. Final Act. 9–12. The Examiner determines that it would have been obvious to combine the concept of using sensor data from consumer products to collect data and determine if an anomaly exists as taught by Zhang with Syed’s system to determine if there is a health risk for a user in Syed in order to notify emergency personnel of an emergency affecting the user as Zhang teaches to do. Id. at 12. Appellant argues that the Examiner relies on Syed to teach using the health model to determine from which of the one or more consumer product sensors to obtain additional sensor values, and Syed does not teach this. Appeal Br. 23.
This argument is not persuasive because the Examiner relies on Zhang, not Syed, to teach this limitation. See Final Act. 10. Appellant argues that non-medical sensor data collected by Zhang’s wrist-worn fall detector cannot be used to train Syed’s health model and no motivation exists to combine different types of sensor data. Appeal Br. 24. This argument is not persuasive because Zhang and Syed use training data to train a health model. Zhang ¶ 62; Syed ¶¶ 21, 141. Syed teaches that other exemplary data sources may be used for training data. Syed ¶¶ 21, 141. The Examiner also proposes to modify Syed with Zhang’s teaching to select additional sensor values to confirm whether an anomaly is possible to improve Syed similarly. “[I]f a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill.” KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 417 (2007). We are not persuaded by Appellant’s attorney argument that a skilled artisan would not be able to combine the teachings of Syed and Zhang as the Examiner proposes without undue experimentation. See Appeal Br. 24–25; Reply Br. 13–14. First, unsupported attorney argument is an inadequate substitute for record evidence. See Becton, Dickinson and Co. v. Tyco Healthcare Grp., LP, 616 F.3d 1249, 1260 (Fed. Cir. 2010). Second, Zhang teaches how to select additional sensor data to confirm whether or not an anomaly (i.e., a fall) has occurred. See Zhang ¶¶ 64, 221. Appellant’s arguments do not address such teachings of Zhang that are cited by the Examiner and thus do not apprise us of Examiner error in this regard. See Jung, 637 F.3d at 1365; 37 C.F.R. § 41.37(c)(1)(iv). Third, the level of disclosure in Syed and Zhang is in at least as much detail as Appellant provides in the Specification.
Appellant’s Specification describes this feature in conceptual terms without any technical details thus indicating that a skilled artisan would understand how to make and use this feature and the level of ordinary skill in the art is sufficient to enable Syed and Zhang’s description of this feature. See Spec. ¶¶ 33–36, 42–51. Indeed, Syed teaches to train a model with training data that may use different data sources. Syed ¶¶ 21, 141. As our reviewing court held in a similar context:

[T]he Board’s observation that appellant did not provide the type of detail in his specification that he now argues is necessary in prior art references supports the Board’s finding that one skilled in the art would have known how to implement the features of the references and would have concluded that the reference disclosures would have been enabling.

In re Epstein, 32 F.3d 1559, 1568 (Fed. Cir. 1994) (quoted in In re Publicover, Appeal 2019-1883, 2020 WL 2510411, *4 (Fed. Cir. May 15, 2020) (“But as the examiner and Board correctly found, Publicover’s specification is just as sparse on how a system would identify this type of eye movement. Under the circumstances, we find this attorney argument as to the capabilities of a skilled artisan unpersuasive.”)). The combined teachings of Syed and Zhang render obvious claim 1 to include the limitation of “us[ing] the health model and the additional sensor values to infer a context that is indicative of whether the anomaly exists.” In this regard, Zhang teaches that device 108 uses location information as an additional means of confirmatory data to avoid false positive fall detections. Zhang ¶ 61. Zhang explains that when changes in acceleration or altitude data are indicative of a fall, location information may be used to determine whether the altitude or acceleration data is a false indication or not, e.g., when the user is in a vehicle accelerating down a hill. Id.
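For illustration only, Zhang's confirmatory-data teaching (Zhang ¶ 61) might be sketched as follows. The function name, threshold, and parameters are hypothetical; the sketch merely shows a fall-like motion signature being confirmed or rejected using location context before an alert is raised.

```python
# Hypothetical sketch: an acceleration/altitude signature that looks like a
# fall is confirmed with location context to avoid false-positive detections.

def confirm_fall(accel_spike, altitude_drop_m, in_vehicle):
    """Return True only if motion data indicates a fall and context agrees."""
    looks_like_fall = accel_spike and altitude_drop_m > 0.5  # assumed threshold
    if not looks_like_fall:
        return False
    # A vehicle accelerating down a hill can mimic a fall signature.
    return not in_vehicle

assert confirm_fall(True, 1.0, in_vehicle=False) is True
assert confirm_fall(True, 1.0, in_vehicle=True) is False   # false positive avoided
```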
The Examiner proposes to improve Syed’s system similarly by using sources of context data that are indicative of whether a potential anomaly exists. Ans. 9–11; Final Act. 12. Thus, we sustain the rejection of claim 1 and claims 2–11, 13, 14, and 18–20, which fall with claim 1.

Claims 12, 16, and 17 Rejected over Syed, Zhang, and Ricci

Appellant does not present arguments for this rejection. See Appeal Br. 26. Therefore, we summarily sustain this rejection.

CONCLUSION

In summary:

Claims Rejected       35 U.S.C. §   Reference(s)/Basis    Affirmed              Reversed
1–20                  112(a)        Written Description   1–20
7, 14                 112(b)        Indefiniteness        7, 14
1–20                  101           Eligibility           1–20
1–11, 13, 14, 18–20   103           Syed, Zhang           1–11, 13, 14, 18–20
12, 16, 17            103           Syed, Zhang, Ricci    12, 16, 17
Overall Outcome                                           1–20

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED