UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 12/694,191
FILING DATE: 01/26/2010
FIRST NAMED INVENTOR: Rami El-Charif
ATTORNEY DOCKET NO.: P2395.10270US01
CONFIRMATION NO.: 5526
EXAMINER: ROLAND, GRISELLE CORBO
ART UNIT: 2158
NOTIFICATION DATE: 08/02/2017
DELIVERY MODE: ELECTRONIC
Correspondence: Maschoff Brennan / PayPal, 1389 Center Drive, Ste. 300, Park City, UT 84098

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): docket@mabr.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte RAMI EL-CHARIF, SANJAY PUNDLKRAO GHATARE, STEVEN CHEN, OLIVIER G. DUMON, MUHAMMAD FAISAL REHMAN, and GUANGLIE SONG

Appeal 2017-003575
Application 12/694,191
Technology Center 2100

Before CARLA M. KRIVAK, HUNG H. BUI, and JON M. JURGOVAN, Administrative Patent Judges.

KRIVAK, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from a Final Rejection of claims 1-15, which are all the claims pending in the application. We have jurisdiction under 35 U.S.C. § 6(b).

We affirm-in-part.

STATEMENT OF THE CASE

Appellants' invention is directed to "methods and systems for simulating a search to generate an optimized scoring function" for "ordering the search results when presenting the search results to an end-user of a computer-based trading or e-commerce application" (Title (capitalization altered); Spec. ¶ 1).

Independent claim 1, and dependent claims 2 and 3, reproduced below, are exemplary of the subject matter on appeal.

1. A computer-implemented method comprising:

with a processor-based data importing module, importing data representing a plurality of search result sets resulting from a plurality of executed search queries, each search result set including a plurality of item listings satisfying a search query from the plurality of executed search queries and a listing slot identifier identifying a position within a search results page of a particular item listing that has resulted in a transaction being concluded, the listing slot identifier determined utilizing a production scoring function specified with one or more parameters that are weighted with a first set of one or more weighting factors;

with a processor-based parameter optimization module, processing the plurality of search result sets to derive a new scoring function, the new scoring function having a second set of one or more weighting factors for one or more parameters, the second set of one or more weighting factors selected to satisfy a set of constraints, the second set of one or more weighting factors being different than the first set of one or more weighting factors; and

with a processor-based scoring function evaluation module, evaluating the performance of the new scoring function relative to a production scoring function.

2. The computer-implemented method of claim 1, wherein the second set of one or more weighting factors are selected to maximize an average rank shift metric for the plurality of search result sets, the average rank shift metric determined by comparing the listing slot identifier as derived by the production scoring function with a listing slot identifier as derived by a new scoring function for each item listing that has resulted in a transaction being concluded for each search result set of the plurality of search result sets.

3. The computer-implemented method of claim 1, further comprising: filtering the plurality of search result sets resulting from the plurality of executed search queries to select search result sets having a predetermined number of item listings assigned to a particular category of item listings, wherein processing the plurality of search result sets to derive a new scoring function includes processing the search result sets having the predetermined number of item listings assigned to the particular category.
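For orientation only, the arrangement recited in claim 1 (a scoring function whose parameters are weighted by one set of factors, a listing slot identifier derived from that function, and a second, differing set of weighting factors derived from the imported result sets) can be sketched in Python. The sketch is purely illustrative: every name, data shape, and the candidate-search strategy are a hypothetical reading of the claim language, not Appellants' implementation or anything in the record.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ItemListing:
    features: Dict[str, float]  # parameter values, e.g. relevance, listing quality
    transacted: bool            # did this listing result in a concluded transaction?

def score(listing: ItemListing, weights: Dict[str, float]) -> float:
    # A scoring function: one or more parameters, each weighted by a factor.
    return sum(w * listing.features[name] for name, w in weights.items())

def slot_of(listing: ItemListing, result_set: List[ItemListing],
            weights: Dict[str, float]) -> int:
    # The listing slot identifier: the listing's position when its result
    # set is ordered by descending score (slot 0 is the top of the page).
    ranked = sorted(result_set, key=lambda it: score(it, weights), reverse=True)
    return ranked.index(listing)

def derive_new_weights(result_sets: List[List[ItemListing]],
                       candidates: List[Dict[str, float]],
                       satisfies: Callable[[Dict[str, float],
                                            List[List[ItemListing]]], bool]
                       ) -> Dict[str, float]:
    # Parameter optimization: choose, from a hypothetical candidate pool, a
    # second set of weighting factors that satisfies the given constraints.
    for weights in candidates:
        if satisfies(weights, result_sets):
            return weights
    raise ValueError("no candidate weight set satisfies the constraints")
```

An evaluation module would then replay the imported result sets under both weight sets and compare where the transacted listings land; one such comparison, corresponding to the metric of claim 2, appears in the sketch accompanying the discussion of claims 2 and 9 below.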
REFERENCES and REJECTIONS

(1) The Examiner rejected claims 1, 5-8, and 12-14 under 35 U.S.C. § 103(a) based upon the teachings of Watson (US 7,444,327 B2; issued Oct. 28, 2008) and Kim (US 8,938,463 B1; issued Jan. 20, 2015).

(2) The Examiner rejected claims 2-4, 9-11, and 15 under 35 U.S.C. § 103 based upon the teachings of Watson, Kim, and Dumon (US 2010/0262602 A1; published Oct. 14, 2010).

ANALYSIS

Claims 1, 5-8, and 12-15

With respect to independent claim 1, Appellants contend the Examiner erred in finding Kim discloses a "new scoring function having a second set of one or more weighting factors for one or more parameters, the second set of one or more weighting factors selected to satisfy a set of constraints, the second set of one or more weighting factors being different than the first set of one or more weighting factors" (Br. 16 (citing Kim col. 6, ll. 25-65; col. 10, l. 45-col. 11, l. 14)). Rather, Appellants contend Kim "simply teaches re-running a model," and "does not disclose or suggest two scoring functions (a production scoring function and a new scoring function)" (Br. 16). Appellants additionally contend the Examiner's combination of Watson and Kim does not teach the claimed first and second sets of weighting factors because Kim and Watson, individually or in combination, "only teach a first set of weighting factors" (Br. 16). Lastly, Appellants argue the Examiner's combination of Kim and Watson does not teach evaluating the performance of a new scoring function relative to a production scoring function, as claimed; rather, combining Kim's "teaching of using 'implicit user feedback' with Watson would, at best, result in a ranking system that uses implicit user feedback as one of the ranking parameters" (Br. 16-17).

We do not agree. We agree with and adopt the Examiner's findings as our own. Particularly, we agree with the Examiner that Kim discloses multiple functions and models "used [by the ranking engine] for scoring and ranking" search results, thereby teaching two different scoring functions as required by claim 1 (Ans. 3 (citing Kim col. 6, ll. 5-19 and ll. 27-65); see also Kim col. 14, ll. 29-37 (discussing two models with "[a]ny difference between these two models . . . likely to be due only to presentation bias")).
Further, we agree with the Examiner that Kim's scoring functions are specified by features/parameters weighted with adjustable weights, and therefore "Kim teaches . . . a first set of weighting factors (weighted features prior to adjustments [of a scoring function]) and a second set of weighting[sic] factors (adjusted weighted features)" for a second scoring function that reduces display bias in search results (Ans. 2-3 (citing Kim col. 6, ll. 27-65; col. 8, ll. 65-67; col. 20, ll. 1-17; Abstract)). Appellants' arguments have not addressed these specific findings by the Examiner in a Reply Brief.

We also are not persuaded by Appellants' argument that the combination of Kim and Watson does not teach the claimed evaluating the performance of a new scoring function relative to a production scoring function (Br. 16-17). Rather, we agree with the Examiner that "Watson creates a new scoring function (. . . optimizer applies adjustments), and evaluates the performance of the new scoring function (. . . adjustments are tested)" (Ans. 3 (citing Watson col. 5, ll. 19-42; col. 6, ll. 25-60; Figs. 4A, 4B, and 5B)).

As to Appellants' argument that combining Kim's "teaching of using 'implicit user feedback' with Watson would, at best, result in a ranking system that uses implicit user feedback as one of the ranking parameters" (Br. 17), we note the broadly recited limitations in claim 1 do not exclude Appellants' proffered scenario. Particularly, claim 1 does not preclude a new scoring function specified by (1) the production scoring function's parameters and weighting factors, together with (2) a new ranking parameter (e.g., Kim's implicit user feedback) weighted by its attendant (new) weighting factor.

Thus, for the above reasons, we sustain the Examiner's rejection of independent claim 1, and independent claims 8 and 15 argued for substantially the same reasons (Br. 17). We also sustain the Examiner's rejection of dependent claims 5, 6, 12, and 13, argued for their dependency on claims 1 and 8 (Br. 23).

Claims 2 and 9

The Examiner finds Dumon's "shifting of weights [in a listing quality score] which takes into account mean values" teaches the average rank shift metric recited in claims 2 and 9 because Dumon's listing quality score ranks an item listing over time by shifting a weight from a predicted score (based on similar items' observed mean price) to an observed score associated with the item listing (Ans. 4 (citing Dumon ¶¶ 35, 36, 39)). Appellants argue Dumon's listing quality score merely measures "the likelihood that an item listing, if presented in a search results page, will result in a transaction being concluded," and does not teach the claimed "average rank shift metric determined by comparing" listing slot identifiers from multiple scoring functions "for each item listing that has resulted in a transaction being concluded" (Br. 19 (citing Dumon ¶¶ 35, 39)). Appellants also argue Dumon does not teach the claimed "one or more weighting factors . . . selected to maximize an average rank shift metric for the plurality of search result sets" (Br. 19).

We agree with Appellants that Dumon's weight shifting in the listing quality score does not teach an average rank shift metric "determined by comparing the listing slot identifier[s]" derived by two scoring functions, as required by claims 2 and 9. Rather, Dumon's weight shifting is determined by "historical data [that] becomes available to assess the actual performance of the item listing" (see Dumon ¶ 39; Br. 19). The cited portions of Dumon also fail to teach selecting "one or more weighting factors . . . to maximize [the] average rank shift metric for the plurality of search result sets," as recited in claims 2 and 9 (Br. 19). We, therefore, do not sustain the Examiner's rejection of claims 2 and 9, as the Examiner has not identified sufficient evidence to support this rejection.
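For context, the metric the claims recite can be pictured concretely. Continuing the hypothetical helpers sketched after the reproduced claims above (ItemListing and slot_of are assumptions of that sketch, not anything in the record), an average rank shift computation might look like:

```python
from typing import Dict, List

def average_rank_shift(result_sets: List[List[ItemListing]],
                       production_weights: Dict[str, float],
                       new_weights: Dict[str, float]) -> float:
    # For each item listing that resulted in a concluded transaction,
    # compare its listing slot identifier under the production scoring
    # function with its slot under the new scoring function, then average
    # the shifts across the plurality of search result sets.
    shifts = []
    for result_set in result_sets:
        for listing in result_set:
            if listing.transacted:
                production_slot = slot_of(listing, result_set, production_weights)
                new_slot = slot_of(listing, result_set, new_weights)
                shifts.append(production_slot - new_slot)  # positive: moved up
    return sum(shifts) / len(shifts) if shifts else 0.0
```

On this reading, the second set of weighting factors of claims 2 and 9 would be chosen to maximize the returned value, i.e., to move transacted listings toward the top of the page, whereas Dumon's shift of weight from a predicted to an observed score involves no comparison of slots across two scoring functions.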
Claims 3, 4, 10, and 11

The Examiner finds Dumon's intermingling of item listings assigned to different groups to "ensure that the search results page includes item listings from groups in quantities established by some predefined ratio" teaches search result sets having a predetermined number of item listings assigned to a particular category of item listings, as recited in claims 3 and 10 (Ans. 5 (citing Dumon ¶¶ 40, 69)). The Examiner finds Dumon's "item listing [that] has been presented in a search results page a predetermined number of times" also discloses this limitation (Final Act. 14 (citing Dumon ¶ 69)).

Appellants argue Dumon's grouping and intermingling item listings are not equivalent to selecting search result sets having a predetermined number of item listings, as claimed (Br. 21).

We agree with Appellants. Dumon's predefined ratio is not a predetermined number of item listings, as recited in claims 3 and 10 (see Dumon ¶ 40; Br. 21). Additionally, Dumon's predetermined number of times an item listing has appeared in search results is not a predetermined number of item listings of a search result set, as claimed (see Dumon ¶ 69; Br. 21). We, therefore, do not sustain the Examiner's rejection of claims 3 and 10, as the Examiner has not identified sufficient evidence to support this rejection. Because we reverse the Examiner's rejection of claims 3 and 10, we also reverse the rejection of claims 4 and 11, dependent therefrom.

Claims 7 and 14

Appellants contend Watson does not teach a new scoring function specified with parameters that differ from the production scoring function, as recited in claims 7 and 14 (Br. 23). Particularly, Appellants argue the claimed "'parameter' is defined as 'a variable that must be given a specific value during the execution of a program or of a procedure within a program,'" and Watson's limiting an application to consideration of less than all sources of performance data "does not infer that [such] parameters differ" (Br. 23 (citing Watson col. 12, ll. 6-32) (emphases added)).

We do not agree. Rather, we agree with the Examiner that Watson's "various sources of collected relevance performance data" are commensurate with the description of "parameters" in Appellants' Specification (Ans. 5 (citing Watson col. 10, l. 45-col. 11, l. 14) (emphasis added); see Spec. ¶ 25; Fig. 2).¹ In contrast, Appellants' argument that the term "'parameter' is defined as 'a variable that must be given a specific value during the execution of a program or of a procedure within a program'" (Br. 23) is not commensurate with the scope of the claim or with the "parameters" described in Appellants' Specification.

Additionally, we agree with the Examiner that Watson's search result relevance optimizer "limited in application to consideration of less than all sources of performance data" teaches the claimed new scoring function "specified with parameters that differ from the production scoring function" (see Watson col. 12, ll. 25-28; Ans. 6). Particularly, Watson's optimized function has differing parameters from a previous scoring function because the optimized function's parameters include "less than all [previously used] sources of performance data" (Ans. 6 (citing Watson col. 10, l. 45-col. 11, l. 14; col. 12, ll. 6-32)).

Accordingly, we sustain the Examiner's rejection of claims 7 and 14, as Appellants' arguments have not persuaded us of error in the Examiner's rejection.

¹ Appellants' Specification describes an "example format for a scoring function is a formula or equation having any number of weighted parameters" including "a weighted relevance score or parameter 46," "a weighted listing quality score or parameter 48" including "demand metrics," "item attributes," and "similar items," and "a weighted business rules score or parameter 50" including "promotions" and "demotions" data (Spec. ¶ 25 (emphases added); Fig. 2 (capitalization altered)).
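The scoring-function format quoted in the footnote reduces, on its face, to a weighted combination of a relevance parameter, a listing quality parameter, and a business rules parameter. A minimal illustration follows; the function name and the 0.5/0.3/0.2 weight values are hypothetical and are not drawn from the Specification:

```python
def example_scoring_function(relevance: float,
                             listing_quality: float,
                             business_rules: float) -> float:
    # A formula with weighted parameters, in the general shape described in
    # Spec. paragraph 25; the weighting factors here are made up.
    return 0.5 * relevance + 0.3 * listing_quality + 0.2 * business_rules
```

Swapping in a different set of weighting factors, or omitting one of the weighted data sources entirely (as in Watson's optimizer), yields a scoring function specified differently from the production function.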
DECISION

The Examiner's decision rejecting claims 1, 5-8, and 12-15 is affirmed.

The Examiner's decision rejecting claims 2-4 and 9-11 is reversed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED-IN-PART