Google LLC, Appeal 2018-006621, Application 14/478,033 (P.T.A.B. May 1, 2020)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/478,033
FILING DATE: 09/05/2014
FIRST NAMED INVENTOR: Nicolaus Todd Mote
ATTORNEY DOCKET NO.: 16113-6369001
CONFIRMATION NO.: 1034
CORRESPONDENT: 26192 7590 05/01/2020, FISH & RICHARDSON P.C., PO BOX 1022, MINNEAPOLIS, MN 55440-1022
EXAMINER: KIM, JONATHAN C
ART UNIT: 2659
NOTIFICATION DATE: 05/01/2020
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): PATDOCTC@fr.com

____________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte NICOLAUS TODD MOTE and RYAN PATRICK DOHERTY

Appeal 2018-006621
Application 14/478,033
Technology Center 2600

Before LARRY J. HUME, JUSTIN BUSCH, and MATTHEW J. McNEILL, Administrative Patent Judges.

BUSCH, Administrative Patent Judge.

DECISION ON APPEAL

Pursuant to 35 U.S.C. § 134(a), Appellant¹ appeals from the Examiner's decision to reject claims 1–7 and 11–20, which constitute all the claims pending in this application. We have jurisdiction over the pending claims under 35 U.S.C. § 6(b). We AFFIRM.

¹ We use the word Appellant to refer to "applicant" as defined in 37 C.F.R. § 1.42. Appellant identifies the real party in interest as Google Inc. Appeal Br. 1.
CLAIMED SUBJECT MATTER

Appellant's disclosure generally "relates to using features of a textual term to classify the textual term." Spec. ¶ 2. More specifically, the claimed subject matter relates to: determining a vector representing phonetic features for an obtained textual term having an unknown definition, comparing the unknown term's vector with reference vectors representing phonetic features for textual terms having known definitions, classifying the unknown term by generating a word score vector, and providing the classified term as an input to a word model that generates representation vectors based on the classified term. See Spec. ¶¶ 4, 20, 22, 35–37, 51, 52, 54, 55, 57, 58, Figs. 1, 4. The word score vector includes a score for each respective reference term based on the similarity of the term to the respective reference term and indicative of the probability the unknown term's definition corresponds to the respective reference term's definition. Spec. ¶ 36.

Representative claim 1 is reproduced below:

1.
A computer-implemented method, comprising:

obtaining an unknown textual term, wherein the unknown textual term is a textual term that has an unknown dictionary definition;

determining, by one or more computers, an unknown term vector representing one or more phonetic features of the unknown textual term;

comparing the unknown term vector representing the one or more phonetic features of the unknown textual term to each of a plurality of reference vectors in a vector space, wherein each reference vector represents a reference textual term having a known definition; and

classifying the unknown textual term based on the comparison of the unknown term vector with each of the plurality of reference vectors, wherein classifying the unknown textual term comprises:

determining a level of similarity between the unknown textual term vector and each of the plurality of reference vectors; and

generating a classified textual term that includes a word score vector comprised of a plurality of fields that each correspond to a respective reference textual term having a known dictionary definition, wherein generating the classified textual term further includes determining a score for each respective field of the word score vector that is (i) based on the similarity determination of the unknown textual vector and the reference vector that represents the respective reference textual term that is associated with the respective field, and (ii) indicative of the probability that the definition of the unknown textual term corresponds to the definition of the respective reference textual term that is associated with the respective field; and

providing the classified textual term as an input to a word model, wherein the word model is configured to process the classified textual term and generate one or more representation vectors, based on the classified textual term.

REJECTIONS

Claims 12–16 stand rejected under 35 U.S.C.
§ 101 as directed to non-statutory subject matter. Final Act. 4.

Claims 1–5, 12, 13, 17, and 18 stand rejected under 35 U.S.C. § 103 as obvious in view of Olds (US 2008/0046405 A1; Feb. 21, 2008) and Anderson (US 2013/0124474 A1; May 16, 2013). Final Act. 4–12.

Claims 6, 14, and 19 stand rejected under 35 U.S.C. § 103 as obvious in view of Olds, Anderson, and Paul Cook & Suzanne Stevenson, Automatically Identifying the Source Words of Lexical Blends in English, 36:1 ASSN. FOR COMPUTATIONAL LINGUISTICS 129–149 (2010) ("Cook"). Final Act. 12–15.

Claims 7 and 15 stand rejected under 35 U.S.C. § 103 as obvious in view of Olds, Anderson, and Charlesworth (US 2002/0052740 A1; May 2, 2002). Final Act. 15–16.

Claims 11, 16, and 20 stand rejected under 35 U.S.C. § 103 as obvious in view of Olds, Anderson, and Liu (US 2013/0173258 A1; July 4, 2013). Final Act. 16–18.

ANALYSIS

We have reviewed the Examiner's rejections in light of Appellant's arguments that the Examiner erred. In reaching this decision, we have considered all evidence presented and all arguments Appellant made. Arguments Appellant could have made, but chose not to make in the Briefs, are deemed waived. See 37 C.F.R. § 41.37(c)(1)(iv).

With respect to the rejections under 35 U.S.C. § 103, Appellant argues the rejection of all pending claims as a group. See Appeal Br. 13 (arguing independent claims 12 and 17 recite features commensurate in scope to those recited in claim 1, and the rejections of all claims should be reversed for the reasons asserted with respect to independent claim 1). We select claim 1 as representative. See 37 C.F.R. § 41.37(c)(1)(iv).

THE 35 U.S.C. § 101 REJECTION OF CLAIMS 12–16

The Examiner rejects claims 12–16 under 35 U.S.C. § 101 as directed to non-statutory subject matter because the claims encompass transitory signals. Final Act. 4. Appellant filed an amendment to claims 12–16, but that amendment was not entered. See Advisory Action (Aug.
1, 2017). Appellant does not contest the rejection of claims 12–16 under 35 U.S.C. § 101. See Appeal Br. 5 (identifying only the rejection of claims 1–5, 12, 13, 17, and 18 under 35 U.S.C. § 103 in view of Olds and Anderson as the grounds of rejection to be reviewed on appeal). Therefore, we summarily affirm this rejection.

THE 35 U.S.C. § 103 REJECTION OF CLAIMS 1–7 AND 11–20

The Examiner finds the combination of Olds and Anderson teaches or suggests every limitation recited in representative claim 1. Final Act. 4–10, 12. Of particular relevance to this Appeal, the Examiner finds Olds and Anderson both teach or suggest "generating a classified textual term that includes a word score vector comprised of a plurality of fields that each correspond to [a] respective reference textual term having a known dictionary definition" having a score in each field that is (1) based on the similarity between the textual term and the respective reference term and (2) indicative of a probability that the respective reference term's definition corresponds to the textual term's definition. Final Act. 6–9 (citing Olds ¶¶ 15, 17, 20, 25, 27, 30, Figs. 2, 3; Anderson ¶¶ 89, 95, 123, 157, 167, 226, Fig. 2D); Ans. 2–9 (additionally citing Olds ¶¶ 13, 14, 16, 18, 22–24; Anderson ¶¶ 119, 125, 127, 215, 225, 238, 239, 252, 253, Figs. 3A, 3B, 10A, 10C, 11B–11D). The Examiner also finds Anderson teaches or suggests "providing the classified textual term as an input to a word model . . . configured to process the classified textual term and generate one or more representation vectors, based on the classified textual term." Final Act. 9 (citing Anderson ¶¶ 157, 160, 167, Figs. 2B, 3A); Ans. 10–12 (additionally citing Anderson ¶¶ 158, 247–250, Fig. 11B).

Olds relates to generating candidate suggestions to replace misspelled query terms by generating a score for each candidate suggestion, then ranking the suggestions. Olds ¶ 3, Abstract.
As part of the overall process that suggests the highest ranked candidate queries to replace an input query, Olds generates "candidate spellings for each term in the query based on . . . phonetic similarity" and assigns a score, which "can correspond to the similarities identified with regard to . . . phonetics," to "each of the candidate spellings." Olds ¶¶ 15–16; see Olds ¶¶ 20–21 (discussing exemplary techniques for phonetically comparing a query term with candidate terms), 25 ("Scoring module 318 assigns a score to each term in list 314."), Figs. 2 (depicting a flow chart of the steps for rendering suggested replacement queries (or paths) from an input query), 3 (depicting the various modules used to generate replacement paths for an input query).

The candidate generation module may use a lexicon that "includes terms that are recognized, such as terms in a dictionary" to generate the list of candidate spellings for each query term. Olds ¶ 18. The list of candidate spelling terms may be constructed as a lattice (e.g., an array or vector) where each column of the first row represents one of the input query terms (i.e., input query terms A, B, C, and D) and each column in the remaining rows represents a candidate spelling term for the term in the first row of the respective column (i.e., candidate terms A1–A3 corresponding to input query term A, candidate terms B1–B3 corresponding to input query term B, etc.). Olds ¶ 24. An exemplary data structure or lattice for storing this information is depicted in Olds' Figure 4 (depicting an exemplary candidate term list).

Olds uses the list of candidate terms as an input to a ranking module that generates and ranks candidate paths. Olds ¶¶ 17, 25–28. Each candidate path includes a candidate spelling suggestion for each term in the query. Olds ¶¶ 17, 24.
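The lattice of scored candidate spellings that Olds describes can be sketched roughly as follows. This is an illustrative sketch only: Olds does not disclose a particular scoring formula, so edit-distance similarity stands in for the phonetic comparison, and the lexicon, function names, and cutoff `k` are all our assumptions.

```python
from difflib import SequenceMatcher
from itertools import product


def score(query_term: str, candidate: str) -> float:
    # Stand-in similarity measure; Olds describes phonetic comparison,
    # but the exact formula is not specified in the cited paragraphs.
    return SequenceMatcher(None, query_term, candidate).ratio()


def build_lattice(query_terms, lexicon, k=3):
    """One column per query term; each column holds the k best-scoring candidates."""
    lattice = {}
    for term in query_terms:
        scored = sorted(((score(term, c), c) for c in lexicon), reverse=True)
        lattice[term] = scored[:k]
    return lattice


def candidate_paths(lattice, query_terms):
    """Each path pairs one candidate per query term; path score = sum of term scores."""
    columns = [lattice[t] for t in query_terms]
    for combo in product(*columns):
        yield sum(s for s, _ in combo), [c for _, c in combo]
```

A ranking module in Olds' scheme would then order these (score, path) pairs and decide how many to render; here `max(candidate_paths(...))` picks the single best path.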
Olds then determines a number of candidate paths to display and renders that number of candidate paths. Olds ¶¶ 26–29. Exemplary features that may be used to rank the candidate paths include whether "a word in the query [is] a word not found in a dictionary," whether "a candidate word [is] found in a dictionary," and a "measure of the phonetic similarity between a query word and a candidate path word." Olds ¶ 30.

Anderson processes a record by comparing it to existing clusters to identify a cluster with which the record can be associated or to create a new cluster with the record being the representative record for the new cluster. Anderson ¶¶ 48–49, Abstract. Anderson "identifies pairs of different identified tokens that are variants of each other . . . on the basis of some similarity score, e.g., by . . . phonetic similarity," and the collected tokens and variant pairs may be augmented by providing dictionaries of words. Anderson ¶ 89; see Anderson ¶ 125 ("The variant profiling engine 116 proceeds to identify tokens that are variants of each other based on a token similarity measure. . . . Another measure is to compare words based on phonetic similarity, for example using the soundex encoding"); see also Anderson ¶ 226 ("scores between a chosen pair of field-values may be based on a similarity criterion . . . such as phonetic similarity"). "An example of a token is a word . . . in a field whose value consists of multiple words." Anderson ¶ 90. Anderson also may "produce[] variant profiler stores 115, including a score archive identifying variant pairs and their similarity scores." Anderson ¶ 127; see Anderson ¶ 241 (describing comparing input terms to candidate terms to generate similarity scores). If the record's field does not result in a similarity score high enough to indicate a match with any existing cluster, the system creates a new cluster, using the record as the representative record. Anderson ¶¶ 249–250.
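The soundex encoding Anderson cites as one phonetic-similarity measure can be illustrated with a simplified American Soundex (first letter plus three digits; the standard h/w adjacency rule is omitted for brevity). Tokens whose codes collide are treated as candidate variant pairs. The helper names are ours, not Anderson's.

```python
from collections import defaultdict


def soundex(word: str) -> str:
    """Simplified American Soundex for a non-empty alphabetic token."""
    codes = {}
    for group, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                         ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in group:
            codes[ch] = digit
    word = word.lower()
    digits = []
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        prev = code  # vowels (and h/w, in this simplified variant) reset the run
    return (word[0].upper() + "".join(digits) + "000")[:4]


def variant_groups(tokens):
    """Group tokens by soundex code; tokens sharing a code are candidate variants."""
    groups = defaultdict(list)
    for t in tokens:
        groups[soundex(t)].append(t)
    return [g for g in groups.values() if len(g) > 1]
```

For example, "Robert" and "Rupert" both encode to R163, so they would be flagged as a variant pair, roughly as Anderson's variant profiling engine would pair phonetically similar tokens.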
The data clusters are updated, and future iterations include comparing new records to be classified against all clusters, including the newly added cluster. Anderson ¶ 250.

Appellant argues neither Olds nor Anderson, alone or in combination, teaches or suggests generating a word score vector as recited. Appeal Br. 6–12. Specifically, Appellant argues Olds' and Anderson's cited disclosures fail to teach or suggest a word score vector that includes a plurality of fields corresponding to respective reference textual terms having known dictionary definitions, where each respective field has a score (1) based on the similarity between the unknown term vector and the respective reference vector and (2) indicative of the probability that the unknown textual term corresponds to the respective reference textual term's definition. Appeal Br. 6–12.

Much of Appellant's argument with respect to Olds focuses on the fact that Olds' paths are associated with queries, not textual terms. See, e.g., Appeal Br. 6–7 ("Olds's 'candidates or paths' of an 'N-best list of paths,' at best appear to relate to 'a query and … a correction for [the] query'" and "an alleged relationship between a 'query' and a 'correction for [the] query' does not provide[] any indication to the extent that the 'query' (or any other term) 'corresponds to a respective reference textual term having a known dictionary definition,' as does the claimed 'word score vector.'"). Appellant does not address the Examiner's findings that Olds' candidate list 314 teaches scoring individual candidate terms for replacing query terms. See Final Act. 6–7 (citing Olds ¶¶ 17, 25); Olds ¶¶ 16 ("candidate spellings for each term in the query are generated based on . . . phonetic similarity"), 17 ("A candidate path includes a candidate spelling suggestion for each term in the query," emphasis added), Figs. 3 (scoring module 318 and candidate list 314), 4 (exemplary candidate list).
In particular, Olds explicitly discloses that "[s]coring module 318 assigns a score to each term in list 314" and "[t]he scores can be assigned based on . . . phonetic similarity." Olds ¶ 25. As discussed above, Olds teaches generating candidate terms (e.g., terms appearing in a dictionary) for each term in a query and scoring each candidate term. Olds ¶¶ 16–18, 25. Olds may store these terms in a data structure with a field for each respective candidate term, which Olds describes as an n-best list of paths. Olds ¶ 24, Fig. 4. Olds also discloses exemplary techniques for phonetically comparing the input term to the candidate term, including by encoding both the input term and the candidate terms. Olds ¶¶ 20–21.

Appellant also appears to conflate Olds' candidate term scoring with Olds' candidate path ranking. See Appeal Br. 9–10. Specifically, Appellant argues Olds uses "the scores to determine of how many paths . . . should be selected for rendering based on the calculated score." Appeal Br. 9–10 (citing Olds ¶¶ 17, 28). Appellant also argues Olds' candidate terms are not terms with known dictionary definitions because Olds' candidate terms "are a query q and a correction t for the query q." Appeal Br. 11 (citing Olds ¶ 27). However, Olds assigns a score to each candidate term in a list based on phonetic similarity, then ranks paths of the n-best lists. Olds ¶¶ 25–26; see Olds, Fig. 3 (depicting a module that scores candidate terms in a candidate list separately from a module that ranks candidate paths). Olds also teaches that the candidate terms may have known dictionary definitions. Olds ¶ 18; accord Final Act. 5 (citing Olds ¶ 21) (explaining that query terms are phonetically compared to terms in a lexicon). Olds then uses the candidate path rankings to determine how many paths should be rendered.
Olds ¶ 28 ("Rendering module 308 uses the scores provided by ranking module 308 to decide how many suggestions to render." (emphasis added)). Therefore, we agree with the Examiner that Olds teaches or suggests a word score vector having the properties recited in the claims because Olds teaches storing candidate terms for each query term with their associated scores that may be based on phonetic similarity. See Final Act. 6–7 (citing Olds ¶¶ 17, 25, Figs. 2 (step 206), 3 (candidate term list 314)); Advisory Act. 2 (finding that the multiple candidates in the n-best path list constitute a vector with multiple fields). We also agree with the Examiner that Olds' candidate term scores indicate a probability that the textual term's definition corresponds to the respective candidate term's dictionary definition because the higher the score assigned to the respective candidate term, the more likely it is that the input term corresponds to the respective candidate term and its associated definition.

Appellant also argues Olds and Anderson fail to teach or suggest "providing the classified textual term as an input to a word model . . . configured to process the classified textual term and generate one or more representation vectors," as recited in the claims. Appeal Br. 12–13. Specifically, Appellant asserts "'add[ing] entries with no variants . . . to the token-representation vector store' as disclosed by Anderson is not the same as" providing the classified textual term to a word model, as recited. Appeal Br. 13. Appellant contends the Examiner's explanations fail to articulate how Anderson's cited portions teach the providing step as claimed. Appeal Br. 13.

The Examiner notes that the providing step is drafted very broadly and merely requires providing the classified textual terms to a data processing system that uses the input term to generate vectors. Ans. 10.
The Examiner explains that Anderson discloses looking up a query record in a cluster store and, if the query record does not exist in the cluster stores, creating a new cluster with the query record as the first cluster member. Ans. 10 (citing Anderson ¶¶ 247–250). The Examiner explains that, by adding the query term to the cluster stores, the query term is added to Anderson's system, which is a word model that classifies terms based on representation vectors. Ans. 11–12. Appellant did not file a reply brief and, therefore, did not respond to the Examiner's further explanation in the Answer.

Appellant points to only paragraphs 3 and 37 as support for the providing step. Appeal Br. 2–4. Paragraph 3's relevance to this step is unclear. See Spec. ¶ 3 (generally describing features present in the claims other than the providing step). Paragraph 37 simply discloses generally that a classified textual term may be stored and "may be used to train an updated model for generating representation vectors" or "may be used as input to another computing system." Spec. ¶ 37. Given the Examiner's findings and explanations, as well as the broad scope of the providing step, we agree with the Examiner that Anderson's cited portions teach or suggest the providing step.

Moreover, Olds' cited portions also disclose providing a classified textual term as an input to a word model. As explained above, Olds stores scores for each generated candidate term and creates an N-best list of paths using the scores. Olds ¶¶ 25–26, Fig. 3 (paths 316). As also discussed above, Olds discloses receiving those classified textual terms in the form of the N-best list of paths as an input to a word model that ranks and renders some number of suggested paths as potential replacement queries. Olds ¶¶ 27–28.
Exemplary factors that Olds' word model uses to rank the paths include a measure of the phonetic similarity between the query term and a candidate term and whether the candidate term is found in a dictionary. Olds ¶ 30 (features 15 and 20). Olds generates candidate paths that each "include[] a candidate spelling suggestion for each term in the query," Olds ¶ 17, and assigns a score to each path using a formula based on various features, Olds ¶ 27. Accordingly, Olds at least suggests providing the candidate list (i.e., the claimed classified textual term) as an input to a ranking module (i.e., the recited word model) that processes the candidate list and generates and scores candidate paths (i.e., the recited representation vectors because each candidate path includes a candidate term for each query term and a score) based on the candidate list.

Appellant also argues Anderson does not teach a classified textual term because Anderson does not teach the word score vector. Appeal Br. 13. Regardless of whether Anderson teaches a word score vector, we are not persuaded by Appellant's argument that Anderson cannot teach the providing step because Anderson does not teach a word score vector. The Examiner alternatively finds Olds teaches the recited word score vector and, as discussed above, we agree. Therefore, Appellant does not address the Examiner's proposed combination of Olds' word score vector with Anderson's teachings regarding the providing step. Non-obviousness cannot be established by attacking references individually where, as here, the ground of unpatentability is based upon the teachings of a combination of references. In re Keller, 642 F.2d 413, 426 (CCPA 1981). Rather, the test for obviousness is whether the combination of references, taken as a whole, would have suggested the patentee's invention to a person having ordinary skill in the art. In re Merck & Co., 800 F.2d 1091, 1097 (Fed. Cir. 1986).
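Schematically, the word score vector recited in claim 1 can be read as follows. This sketch is our illustration of the claim language, not the application's actual implementation: the letter-bigram featurization (a crude proxy for "phonetic features"), the cosine comparison, and the softmax normalization are all assumptions, since the Specification is not reproduced here.

```python
import math


def phonetic_vector(term: str) -> dict:
    # Toy feature vector: letter-bigram counts stand in for phonetic features.
    feats = {}
    for a, b in zip(term, term[1:]):
        feats[a + b] = feats.get(a + b, 0) + 1
    return feats


def cosine(u: dict, v: dict) -> float:
    # Similarity between two sparse feature vectors.
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0


def word_score_vector(unknown: str, references: list) -> dict:
    """One field per reference term; softmax makes the scores behave like probabilities."""
    sims = {r: cosine(phonetic_vector(unknown), phonetic_vector(r)) for r in references}
    z = sum(math.exp(s) for s in sims.values())
    return {r: math.exp(s) / z for r, s in sims.items()}
```

Each field's score is (i) derived from the similarity of the unknown term's vector to the respective reference vector and (ii), after normalization, interpretable as a probability that the unknown term shares that reference term's definition, mirroring the two properties the claim recites.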
For the above reasons, on this record, we are not persuaded the Examiner erred in rejecting independent claims 1, 12, and 17 as obvious in view of the combination of Olds and Anderson. Appellant does not separately argue the patentability of dependent claims 2–7, 11, 13–16, and 18–20 with particularity. Therefore, for the same reasons, we are not persuaded the Examiner erred in rejecting claims 2–7, 11, 13–16, and 18–20 as obvious.

CONCLUSION

Claims Rejected      | 35 U.S.C. § | References/Basis             | Affirmed            | Reversed
12–16                | 101         | Non-Statutory                | 12–16               |
1–5, 12, 13, 17, 18  | 103         | Olds, Anderson               | 1–5, 12, 13, 17, 18 |
6, 14, 19            | 103         | Olds, Anderson, Cook         | 6, 14, 19           |
7, 15                | 103         | Olds, Anderson, Charlesworth | 7, 15               |
11, 16, 20           | 103         | Olds, Anderson, Liu          | 11, 16, 20          |
Overall Outcome      |             |                              | 1–7, 11–20          |

TIME PERIOD FOR RESPONSE

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED