Ex parte Amarnag Subramanya et al., Appeal 2018-008466 (P.T.A.B. June 2, 2020)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 13/801,598
FILING DATE: 03/13/2013
FIRST NAMED INVENTOR: Amarnag Subramanya
ATTORNEY DOCKET NO.: 0058-573001
CONFIRMATION NO.: 5572

79318 7590 06/02/2020
BRAKE HUGHES BELLERMANN LLP
P.O. Box 1077
Middletown, MD 21769

EXAMINER: KHOSHNOODI, FARIBORZ
ART UNIT: 2157
NOTIFICATION DATE: 06/02/2020
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): docketing@brakehughes.com, uspto@brakehughes.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte AMARNAG SUBRAMANYA, FERNANDO PEREIRA, NI LAO, JOHN BLITZER, and RAHUL GUPTA

Appeal 2018-008466
Application 13/801,598
Technology Center 2100

Before JOHN A. JEFFERY, SCOTT B. HOWARD, and JASON M. REPKO, Administrative Patent Judges.

REPKO, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Under 35 U.S.C. § 134(a), Appellant appeals from the Examiner’s decision to reject claims 1–18 and 26–30.1 We have jurisdiction under 35 U.S.C. § 6(b). We REVERSE.

CLAIMED SUBJECT MATTER

The invention generates search results from natural-language queries. Spec. ¶ 24. To accomplish this, the invention maps the queries to a data graph. Id. ¶ 3. A data graph stores data and rules that describe knowledge.

1 We use the word Appellant to refer to applicant as defined in 37 C.F.R. § 1.42(a).
Appellant identifies the real party in interest as Google LLC. Appeal Br. 3.

Id. ¶ 1. Its basic unit is a tuple. Id. Tuples represent facts, such as “Maryland is a state in the United States.” Id. A data graph’s nodes represent people, places, things, and concepts. Id. Edges link related nodes. Id. For example, the nodes “Maryland” and “United States” may be linked by the edges “in country” or “has state.” Id.

According to the Specification, building a data graph is typically a manual process, which can be slow and error prone. Id. To address this problem, the invention automates the graph-building process using machine learning. Id. ¶¶ 31–33.

Claims 1 and 11 are independent. Claim 1 is reproduced below.

1. A computer-implemented method, the method comprising:
    receiving, using at least one processor, a machine learning module trained to produce a model with multiple weighted features for a particular query, each weighted feature representing a path in a data graph and the weight of the feature being a probability of predicting a correct answer using the path;
    receiving a search query that includes a first search term;
    mapping the search query to the particular query;
    mapping the first search term to a first entity in the data graph;
    identifying, using the at least one processor, a second entity in the data graph using the first entity and at least one of the multiple weighted features; and
    providing, using the at least one processor, information relating to the second entity in a response to the search query.

Appeal Br. 30.2

2 Throughout this opinion, we refer to the Final Office Action (“Final”), mailed August 18, 2017; the Advisory Action (“Advisory”), mailed March 16, 2018; the Appeal Brief (“Appeal Br.”), filed March 12, 2018; the Advisory Action (“Advisory II”), mailed March 16, 2018; the Examiner’s Answer

REFERENCES

The Examiner relies on the references in the table below.
Name        Reference           Date
Barker      US 6,141,659        Oct. 31, 2000
Jost        US 2002/0055916 A1  May 9, 2002
Cameron     US 7,430,552 B2     Sept. 30, 2008
Strehl      US 2010/0268710 A1  Oct. 21, 2010
Kenthapadi  US 2010/0318546 A1  Dec. 16, 2010
Byrne       US 2013/0080461 A1  Mar. 28, 2013
Li          US 2013/0226846 A1  Aug. 29, 2013

REJECTIONS

The Examiner rejects claims 1, 2, 7–11, 27, 28, and 30 under 35 U.S.C. § 103 as unpatentable over Byrne, Jost, and Strehl. Final 2–10.

The Examiner rejects claims 3, 12, 13, and 18 under 35 U.S.C. § 103 as unpatentable over Byrne, Jost, Strehl, and Kenthapadi. Final 10–13.

The Examiner rejects claims 4–6 and 14–17 under 35 U.S.C. § 103 as unpatentable over Byrne, Jost, Strehl, Kenthapadi, and Cameron. Final 13–17.

The Examiner rejects claims 26 and 29 under 35 U.S.C. § 103 as unpatentable over Byrne, Jost, Strehl, and Barker.3 Final 20–22.

(“Ans.”), mailed June 22, 2018; and Reply Brief (“Reply Br.”), filed August 22, 2018.

3 In the Final Rejection, the Examiner rejected claim 25. Final 17–20. But claim 25’s rejection is not before us because Appellant later canceled claim 25 in an amendment entered by the Examiner after the final rejection. See Advisory II 1, item 7.

OPINION

The Rejection of Claims 1, 2, 7–10, 27, and 28 over Byrne, Jost, and Strehl

In the rejection of claim 1, the Examiner finds that Byrne teaches all limitations except for the recited weights and the machine-learning module. Final 2–4. For these limitations, the Examiner turns to Jost and Strehl. Id. at 3–4. In particular, the Examiner concludes that, at the time of the invention, it would have been obvious for a skilled artisan (1) to represent Jost’s probabilities as Byrne’s weights, and (2) to use Strehl’s machine-learning module in Byrne’s system. Id. at 3–5.

As for the recited step of “mapping the search query to the particular query,” the Examiner finds that Byrne maps queries to known elements in a graph. Ans.
5–6 (citing Byrne ¶¶ 19, 21); see also Final 3 (citing Byrne ¶¶ 27, 39).

Appellant’s Arguments

Appellant argues that Byrne lacks a query-to-query mapping because Byrne’s graph is not a query, nor does it represent a query. Appeal Br. 16. In Appellant’s view, the claim requires two queries: a search query and a particular query. Id. According to Appellant, Byrne discovers indirect paths between entities, but the claimed invention trains a machine-learning module on the particular query, then maps search queries to that query. Id.; Reply Br. 5.

Appellant makes other arguments. See Appeal Br. 8–21; Reply Br. 2–5. But we find this argument dispositive of all issues on appeal.

Issue

Under § 103, has the Examiner erred in finding that Byrne maps a search query to the recited particular query?

Analysis

Claim 1 recites, in part, “a machine learning module trained to produce a model with multiple weighted features for a particular query.” Appeal Br. 30 (emphasis added). Claim 1 further recites, in part, “mapping the search query to the particular query.” Id. (emphasis added). That is, claim 1 requires a query-to-query mapping.

As for the particular query, the Examiner finds that Byrne lacks the machine-learning module4 but that Byrne nevertheless produces a model for a particular query. Final 3. The Examiner explains that Byrne’s module searches for important keywords in the natural-language question. Id. (citing Byrne ¶ 37). In the Examiner’s view, this search is the particular query. See id. (“The module searches (query) . . .”). But the Examiner does not sufficiently explain how another query is mapped to Byrne’s module search. Id.

In the Advisory Action, the Examiner finds that Byrne maps the exemplary query “Customer Address” to a known reference to a similar business term in the graph. Advisory 2 (citing Byrne ¶ 21). Appellant, however, points out that Byrne’s graph is not a particular query. Appeal Br. 16. We agree.
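The distinction at issue can be illustrated with a short sketch of the claimed arrangement, in which an incoming search query is first mapped to the particular query the model was trained for, and only then resolved against the data graph by following the best-weighted path feature. This is purely illustrative: the toy graph, the model, the weights, and all names below are hypothetical and are not drawn from the record or from any party's implementation.

```python
# Illustrative sketch of claim 1's arrangement (all data hypothetical).
# A "data graph" stores facts as (subject, edge) -> object tuples.
GRAPH = {
    ("Maryland", "in country"): "United States",
    ("United States", "has capital"): "Washington, D.C.",
}

# A trained model for one *particular* query: each feature is a path
# (a sequence of edges) in the graph, weighted by the probability that
# following it yields a correct answer.
MODEL = {
    "capital of the state's country": [
        (("in country", "has capital"), 0.9),  # high-confidence path
        (("in country",), 0.1),                # low-confidence path
    ]
}

def map_to_particular_query(search_query: str) -> str:
    # The query-to-query mapping step: the incoming search query is
    # mapped to the particular query the model was trained for
    # (reduced here to a trivial lookup).
    return "capital of the state's country"

def answer(search_query: str, first_term: str) -> str:
    particular = map_to_particular_query(search_query)
    # Try the weighted path features, best weight first.
    features = sorted(MODEL[particular], key=lambda f: -f[1])
    for path, _weight in features:
        entity = first_term  # the first entity in the data graph
        for edge in path:
            entity = GRAPH.get((entity, edge))
            if entity is None:
                break
        if entity is not None:
            return entity    # the identified second entity
    return "no answer"

print(answer("what is the capital of Maryland's country?", "Maryland"))
# prints "Washington, D.C."
```

Under this sketch, nothing in the graph itself plays the role of the particular query; the graph is only the data set traversed after the query-to-query mapping has occurred, which is the gap the Board identifies in Byrne.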
Byrne’s graph represents the data set to be searched, not any particular query. See Byrne ¶¶ 18–19, 24, 37. Specifically, Byrne uses a pre-calculated, weighted, semantic graph. Id. ¶ 37. This graph is created by harvesting (e.g., importing) metadata sources, models, and other items that Byrne calls “information blueprints.” Id. ¶ 24. The harvesting creates a Resource Description Framework (RDF) graph. Id. ¶ 19. Byrne calls the RDF graph “a pool of knowledge.” Id. ¶ 18. In one embodiment, the graph is a union of architectural views of an IT infrastructure’s state. Id. ¶ 19.

4 The Examiner finds that Strehl teaches a machine-learning module. Final 4.

To use the graph, Byrne maps a question to the graph’s elements. Id. ¶ 21. For example, “Customer Address” is mapped to similar business terms in the graph. Id. The word “who” in the query is mapped to an IT project. Id. Here, the Examiner has not shown that either the business terms or the IT project is a particular query. Advisory 2.

In the Answer, the Examiner explains that Byrne allows more than one query. Ans. 5. Indeed, Byrne lets the user issue multiple queries through an interface. Byrne ¶ 19, cited in Ans. 5. Yet the claim requires mapping one query to another. The Examiner has not shown that Byrne’s queries are related in this way. Instead, Byrne maps the queries to the pre-calculated graph. See id. ¶ 37.

Thus, the Examiner erred in finding that Byrne teaches or suggests “mapping the search query to the particular query.” Appeal Br. 16. Because this issue is dispositive of the rejection’s error, we need not address Appellant’s other arguments. On this record, we do not sustain the rejection of claim 1 and dependent claims 2, 7–10, 27, and 28, for similar reasons.

The Rejection of Claims 11 and 30 over Byrne, Jost, and Strehl

Similar to claim 1’s mapping step, claim 11 recites, in part, “receiving a user request matching the query.” Appeal Br. 32.
That is, instead of reciting mapping generally, claim 11 requires a particular type of mapping: one that matches a request to a query.

As in the rejection of claim 1, the Examiner explains that Byrne’s module searches for important keywords in the natural-language question. Compare Ans. 8 (citing Byrne ¶ 37), with Final 3 (citing Byrne ¶ 37). In the Examiner’s view, this search is the particular query. See Ans. 8 (“The module searches (query) . . .”).

Appellant argues that Byrne does not suggest that the prior knowledge of the terms associated with the graph is related to a query. Appeal Br. 32. We agree. Rather, as discussed above, Byrne’s module searches a graph to answer the user’s natural-language question. See Byrne ¶ 37, cited in Final 7. For instance, Byrne finds graph nodes matching the user’s request. Id. ¶ 39, cited in Final 7. To complete the search, Byrne then finds paths between the nodes. Id. ¶ 40. So in the relied-upon paragraphs, Byrne describes searching the graph, instead of matching a request to a query. See id. ¶¶ 37, 39, cited in Final 7.

In this way, the Examiner erred in finding that Byrne teaches or suggests “receiving a user request matching the query.” Because this issue is dispositive of the rejection’s error, we need not address Appellant’s other arguments. Thus, we do not sustain the rejection of claim 11 and dependent claim 30, for similar reasons.

The Rejection over Byrne, Jost, Strehl, and Kenthapadi

In rejecting claims 3, 12, 13, and 18, the Examiner cites Kenthapadi for the limited purpose of showing that training the machine-learning module by generating noisy query answers was known. Final 10–13. Because the Examiner has not shown that Kenthapadi cures the above-noted deficiencies, we also do not sustain the obviousness rejection of claims 3, 12, 13, and 18.
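For background, the kind of training the claims describe, estimating a path feature's weight as the probability that the path predicts a correct answer from a set of possibly noisy example answers, can be sketched in a few lines. This is an illustrative sketch only: the observation data, entity names, and the simple frequency-count estimator below are hypothetical and do not come from the record or from any cited reference.

```python
from collections import defaultdict

# Hypothetical training observations:
# (start entity, path of edges followed, whether the answer was judged correct).
# The labels may be noisy, e.g., the "Ohio" example below.
observations = [
    ("Maryland", ("in country", "has capital"), True),
    ("Texas",    ("in country", "has capital"), True),
    ("Ohio",     ("in country", "has capital"), False),  # noisy label
    ("Maryland", ("in country",),               False),
    ("Texas",    ("in country",),               False),
]

correct = defaultdict(int)
total = defaultdict(int)
for _entity, path, ok in observations:
    total[path] += 1
    correct[path] += ok  # bool adds as 0 or 1

# Each path feature's weight is its empirical probability of producing
# a correct answer over the training observations.
weights = {path: correct[path] / total[path] for path in total}
# The two-edge path earns weight 2/3; the one-edge path earns 0.0.
```

Under this sketch, noisy labels simply lower a path's estimated weight rather than breaking training, which is consistent with the general idea of training on generated, imperfect query answers.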
The Rejection over Byrne, Jost, Strehl, Kenthapadi, and Cameron

In rejecting claims 4–6 and 14–17, the Examiner cites Cameron for the limited purpose of showing that selecting the positive and negative training examples was known. Final 13–20. Because the Examiner has not shown that Cameron cures the above-noted deficiencies, we also do not sustain the obviousness rejection of claims 4–6 and 14–17.

The Rejection over Byrne, Jost, Strehl, and Barker

In rejecting claims 26 and 29, the Examiner cites Barker for the limited purpose of teaching the recited templates. Final 20–22. Because the Examiner has not shown that Barker cures the above-noted deficiencies, we also do not sustain the obviousness rejection of claims 26 and 29.

CONCLUSION

The Examiner’s decision rejecting claims 1–18 and 26–30 is reversed.

DECISION SUMMARY

Claims Rejected         35 U.S.C. §  Reference(s)/Basis                        Affirmed  Reversed
1, 2, 7–11, 27, 28, 30  103          Byrne, Jost, Strehl                                 1, 2, 7–11, 27, 28, 30
3, 12, 13, 18           103          Byrne, Jost, Strehl, Kenthapadi                     3, 12, 13, 18
4–6, 14–17              103          Byrne, Jost, Strehl, Kenthapadi, Cameron            4–6, 14–17
26, 29                  103          Byrne, Jost, Strehl, Barker                         26, 29
Overall Outcome                                                                          1–18, 26–30

REVERSED