Ex parte Mason, Appeal 2009-002829, Application 10/684,313 (B.P.A.I. Sep. 2, 2010)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________

BEFORE THE BOARD OF PATENT APPEALS AND INTERFERENCES
____________________

Ex parte ZACHARY J. MASON
____________________

Appeal 2009-002829
Application 10/684,313
Technology Center 2100
____________________

Before JOHN A. JEFFERY, ST. JOHN COURTENAY III, and THU A. DANG, Administrative Patent Judges.

DANG, Administrative Patent Judge.

DECISION ON APPEAL [1]

[1] The two-month time period for filing an appeal or commencing a civil action, as recited in 37 C.F.R. § 1.304, or for filing a request for rehearing, as recited in 37 C.F.R. § 41.52, begins to run from the “MAIL DATE” (paper delivery mode) or the “NOTIFICATION DATE” (electronic delivery mode) shown on the PTOL-90A cover letter attached to this decision.

I. STATEMENT OF THE CASE

Appellant appeals from the Examiner’s final rejection of claims 1-16 under 35 U.S.C. § 134(a) (2002). We have jurisdiction under 35 U.S.C. § 6(b).

We reverse.

A. INVENTION

According to Appellant, the invention relates to analyzing browse activity data of users of a database access system, and more specifically, to the analysis of item selection histories of users of a database access system to predict category preferences or affinities to such users (Spec. 1, ¶ [0001]).

B. ILLUSTRATIVE CLAIMS

Claims 1 and 11 are exemplary and are reproduced below:

1. A computer-implemented method of analyzing browse activity data of users of a database access system, the method comprising:

providing a browse tree in which items represented within a database are arranged within item categories over multiple levels of item categories;

assigning individual user history scores to specific categories of the browse tree based at least in-part on an item selection history of a user, wherein the individual user history scores represent the user's predicted affinities for the corresponding item categories;

assigning collective user history scores to specific categories of the browse tree based at least in-part on item selection histories of a population of users, wherein the collective user history scores represent the predicted affinities of the user population for the corresponding item categories; and

evaluating differences between the individual user history scores and the collective user history scores to generate a relative preference profile for the user, wherein the relative preference profile comprises relative preference scores for specific item categories, said relative preference scores reflecting a degree to which the user's predicted affinity for a category differs from the predicted affinity of the user population for that category.

11. A method of distributing credit for a selection event among the nodes of a browse tree, the method comprising:

determining a total amount of credit to be distributed for the selection event in which a user selected an item within the browse tree;

identifying each ancestor node of the selected item within the browse tree;

dividing said total amount of credit by the number of ancestor nodes of the selected item to determine an amount of credit per ancestor to be distributed for the selection event; and

assigning said amount of credit per ancestor to the ancestor nodes of the selected item within the browse tree.
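For context, the following minimal Python sketch illustrates the steps recited in reproduced claims 1 and 11. The function names, the dictionary-based browse-tree representation, and the example values are invented for illustration, and the use of simple subtraction is only one possible way of "evaluating differences"; the sketch is not drawn from the application or the record.

# Minimal illustrative sketch of the steps recited in claims 1 and 11.
# The names, the dictionary-based browse tree, and the example values
# are assumptions for illustration; subtraction is used here only as one
# possible way of "evaluating differences" between score sets.

def relative_preference_profile(individual_scores, collective_scores):
    """Generate relative preference scores for item categories by
    evaluating differences between a user's individual history scores
    and the collective history scores of the user population (claim 1)."""
    return {category: individual_scores.get(category, 0.0) - score
            for category, score in collective_scores.items()}

def distribute_credit(parent_of, selected_item, total_credit):
    """Distribute credit for a selection event equally among the
    ancestor nodes of the selected item in a browse tree (claim 11)."""
    # Identify each ancestor node of the selected item.
    ancestors = []
    node = parent_of.get(selected_item)
    while node is not None:
        ancestors.append(node)
        node = parent_of.get(node)
    # Divide the total credit by the number of ancestor nodes ...
    credit_per_ancestor = total_credit / len(ancestors)
    # ... and assign that amount to each ancestor node.
    return {ancestor: credit_per_ancestor for ancestor in ancestors}

# Example browse tree: Books > Fiction > Science Fiction > "Novel X".
parent_of = {"Novel X": "Science Fiction",
             "Science Fiction": "Fiction",
             "Fiction": "Books"}
print(distribute_credit(parent_of, "Novel X", 3.0))
# {'Science Fiction': 1.0, 'Fiction': 1.0, 'Books': 1.0}

print(relative_preference_profile({"Books": 0.75, "Music": 0.25},
                                  {"Books": 0.5, "Music": 0.5}))
# {'Books': 0.25, 'Music': -0.25}: positive values indicate categories
# the user favors more than the user population does.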
C. REJECTIONS

The prior art relied upon by the Examiner in rejecting the claims on appeal is:

Herz    US 6,460,036 B2    Oct. 1, 2002
Ortega  US 6,606,619 B2    Aug. 12, 2003
Ford    US 6,963,867 B2    Nov. 8, 2005 (filed on Mar. 31, 2003)

Claims 11-14 stand rejected under 35 U.S.C. § 102(e) as anticipated by the teachings of Ford.

Claims 1-10 and 15-16 stand rejected under 35 U.S.C. § 103(a) over the teachings of Ortega in view of Herz.

Claims 2-3 and 16 stand rejected under 35 U.S.C. § 103(a) over the teachings of Ortega in view of Herz and Ford.

II. ISSUES

1) Has the Examiner erred in finding that Ford teaches “dividing said total amount of credit by the number of ancestor nodes” (claim 11), as Appellant contends?

2) Has the Examiner erred in finding that Ortega and Herz teach or suggest “assigning individual user history scores to specific categories of the browse tree based at least in-part on an item selection history” and “evaluating differences between the individual user history scores and the collective user history scores to generate a relative preference profile for the user” (claim 1), as Appellant contends?

III. FINDINGS OF FACT

The following Findings of Fact (FF) are shown by a preponderance of the evidence.

Ford

1. Ford discloses a category ranking process used to generate a category relevancy ranking for each competing category in an All Products search (col. 21, ll. 4-6; Fig. 8), wherein a query server determines a category popularity score indicative of the significance of the query term to the category (col. 21, ll. 58-60).

2. In an embodiment, the category popularity score is determined by taking the mean value of the constituent top-level result item popularity scores (col. 22, ll. 50-52), wherein the mean value is determined by dividing the sum of popularity scores for the constituent search result items in each category by the total number of constituent search result items in each category (col. 22, ll. 52-57; Table IV).

Herz

3. Herz discloses automatically computing the likelihood of interest in a particular target object for a specific user (col. 18, ll. 49-51; Fig. 12), wherein the computing process includes estimating the intrinsic quality measure for a target object (col. 18, ll. 55-58), and computing the topical interest that users, like the specific user, generally have in the object (col. 19, ll. 18-22).

4. Estimating the intrinsic quality measure of the object includes selecting the attributes of the object, which may include the object’s popularity among users in general (col. 18, ll. 65-67), multiplying the selected attributes by a stored weight indicative of the specific user’s preference for the object (col. 19, ll. 6-11), and computing the weighted sum of the selected attributes (col. 19, ll. 11-13).

5. Estimating topical interest at all points includes interpolating among estimates of topical interest at selected points (col. 19, ll. 34-42) by determining the similarity distance between (U,X), an extended object that bears all the attributes of target X and all the attributes of user U, and (V,Y), an extended object that bears all the attributes of target Y and all the attributes of user V, wherein user U’s interest can be estimated even if user U is a new user or has never provided feedback, because the relevance feedback of users whose attributes are similar to U’s attributes is taken into account (col. 20, ll. 51-55).
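For illustration of the arithmetic described in FF 2, the short sketch below computes a category popularity score as the mean of the constituent result items' popularity scores, so the divisor is the number of constituent items in the category. The function name and the example scores are invented for illustration and are not taken from Ford's Table IV.

# Minimal sketch of the mean-value computation described in FF 2; the
# function name and example scores are invented and are not taken from
# Ford's Table IV.

def category_popularity_score(item_popularity_scores):
    """Return the mean of the constituent search-result item popularity
    scores in a category: the sum of the item scores divided by the
    number of constituent items in the category."""
    return sum(item_popularity_scores) / len(item_popularity_scores)

# Example: three constituent search-result items in one category.
scores_in_category = [0.75, 0.5, 0.25]
print(category_popularity_score(scores_in_category))  # 0.5 (= 1.5 / 3)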
IV. ANALYSIS

35 U.S.C. § 102(e)
Claims 11-14

Appellant contends that “Ford does not disclose ‘identifying each ancestor node of the selected item within the browse tree’” (App. Br. 5). In particular, Appellant contends that the Office Action’s reading of the “categories” as the “ancestor” is “improper” because “no credit from a selection event is ever allocated to the ‘categories’” and “[c]laim 1[sic] relates to ancestors of an item associated with a selection event, and not with ancestors of a ‘group of related items’” (App. Br. 7). Furthermore, Appellant contends that “Ford does not disclose ‘dividing said total amount of credit by the number of ancestor nodes of the selected item’” (App. Br. 6). That is, Appellant argues that, in Ford, “the divisor is a number of different items (not a number of ancestor nodes of an item associated with a selection event)” (id.).

However, the Examiner finds that “Ford teaches receiving a search query, interpreted as a selection event” (Ans. 12), and “the top level categories are interpreted to be ancestor nodes from which items belong to” (Ans. 12-13). In particular, the Examiner finds that “the limitation [‘identifying each ancestor node of the selected item within the browse tree’] is interpreted by the examiner as identifying the groups from which a selected item belongs to” (id.). The Examiner further finds that, in Ford, “the total score within a category is added up and divided by the number of items, which provides a score for a category” (Ans. 13).

To determine whether Ford teaches “dividing said total amount of credit by the number of ancestor nodes,” as recited in claim 11, we begin our analysis by giving the claims their broadest reasonable interpretation. See In re Bigio, 381 F.3d 1320, 1324 (Fed. Cir. 2004). Since claim 11 does not place any limitation on what “ancestor node” means, includes, or represents, other than reciting “dividing said total amount of credit by the number of ancestor nodes of the selected item to determine an amount of credit per ancestor,” we interpret “ancestor node” as meaning any node at a higher level than a particular node.

Ford teaches a search process which includes determining the significance of a query term to a category (FF 1), wherein a popularity score is determined for the constituent search result items in each category (FF 2). We find Ford to disclose an ancestor node for the particular search result items since the category comprising the particular search result items is a node at a higher level than the search result items.

Though Appellant contends that “no credit from a selection event is ever allocated to the ‘categories’” (App. Br. 7), we disagree. As the Examiner finds, “Ford teaches receiving a search query, interpreted as a selection event” (Ans. 12). Further, Ford discloses determining a category popularity score (FF 2). Thus, contrary to Appellant’s contention, credit (category popularity score) for the selection event is allocated to the categories.

Furthermore, though Appellant contends that “[c]laim 1[sic] relates to ancestors of an item associated with a selection event, and not with ancestors of a ‘group of related items’” (App. Br. 7), we find an ancestor of a group of related items associated with a selection event to be an ancestor of “an item” associated with the selection event.
That is, a group of items comprises “an item.”

Although we agree with the Examiner in finding that Ford discloses “identifying each ancestor node of the selected item within the browse tree” (claim 11), we agree with Appellant that there is no teaching in the sections of Ford pointed out by the Examiner of “dividing said total amount of credit by the number of ancestor nodes of the selected item” (App. Br. 6). That is, in view of our finding of “categories” to be “ancestor nodes,” in Ford, the mean value is determined by dividing the sum of item scores by the total number of items (FF 2), and not by the number of categories. We agree with Appellant’s argument that, in Ford, “the divisor is a number of different items (not a number of ancestor nodes of an item associated with a selection event)” (App. Br. 6). In fact, even the Examiner admits that, in Ford, “the total score within a category is added up and divided by the number of items” (Ans. 13).

As such, we find that Appellant has shown that the Examiner erred in rejecting claim 11, and claims 12-14 standing therewith, over Ford.

35 U.S.C. § 103(a)
Claims 1-10 and 15-16

Appellant contends that “Herz does not calculate the topical interest f by evaluating the difference between the relevance feedback for a particular user for the target object and the relevance feedback from a population of users for that target object” but instead “an interpolation is done on the function f(V,Y), which takes into account the f value for all evaluated objects (V) and for all user (Y)” (App. Br. 9). That is, Appellant argues that Herz does not disclose or suggest “a differencing carried out between the relevance feedback (e.g. click and browse behavior) of an individual user and the relevance feedback of a population of users” (App. Br. 10).

However, the Examiner finds that “the preference of a user is computed by utilizing specific data made by a user and data from similar user’s through feedback[] from users” (Ans. 15-16), where “the similarity distance between user preference and other user’s preferences are calculated to predict relevance of an object to a user” (Ans. 16). In particular, the Examiner finds that “[t]he attribute data for a user is interpreted to be the user history scores of the present invention and is compared to attribute data from other users to find the difference” (id.). That is, the Examiner finds that “the similarity distance between (U,X), representing a user’s interest in an object and (V, Y), representing different users’ interest in an object, can be used for topical interest” (id.).

Herz discloses computing the likelihood of interest in a particular (target) object for a specific user by summing the estimated quality measure for the object with the topical interest of like users for the object (FF 3), wherein estimating the quality measure of the object includes using the object’s popularity among users and multiplying such attributes by a value indicative of the specific user’s preference (FF 4).
In Herz, estimating the particular user’s interest includes interpolation using the similarity distance between 1) the attributes of the particular object with the attributes of the particular user, and 2) the attributes of similar objects with the attributes of the similar users, wherein the particular user’s interest can be estimated even if the particular user is a new user or has never provided feedback, because the relevance feedback of users whose attributes are similar to the particular user’s attributes is taken into account (FF 5).

We agree with Appellant that Herz does not disclose or suggest “a differencing carried out between the relevance feedback (e.g. click and browse behavior) of an individual user and the relevance feedback of a population of users” (App. Br. 10). That is, we disagree with the Examiner’s finding that, in Herz, “the similarity distance between user preference and other user’s preferences are calculated to predict relevance of an object to a user” (Ans. 16). That is, in Herz, the particular user is a new user or has never provided feedback (FF 5). Thus, contrary to the Examiner’s finding that “[t]he attribute data for a user is interpreted to be the user history scores of the present invention and is compared to attribute data from other users to find the difference” (Ans. 16), we find that there is no teaching in the sections of Herz pointed out by the Examiner of “assigning individual user history scores to specific categories of the browse tree based at least in-part on an item selection history” (claim 1) since there is no selection history for the particular user who is a new user or a user who has not provided feedback. Thus, we find Herz also does not disclose “evaluating differences between the individual user history scores and the collective user history scores to generate a relative preference profile for the user,” as required by claim 1.

We find that Ortega does not cure these deficiencies. As such, we find that Appellant has shown that the Examiner erred in rejecting claim 1, and claims 2-10 and 15-16 standing therewith, over Ortega in view of Herz.

Claims 2-3 and 16

We find that Ford does not cure the deficiencies of Ortega and Herz. As such, we will also reverse the rejection of claims 2-3 and 16 over Ortega in view of Herz and Ford.

V. CONCLUSION

Appellant has shown that the Examiner erred in finding claims 11-14 anticipated by the teachings of Ford under 35 U.S.C. § 102(e); in holding claims 1-10 and 15-16 unpatentable over the teachings of Ortega in view of Herz under 35 U.S.C. § 103(a); and in holding claims 2-3 and 16 unpatentable over the teachings of Ortega in view of Herz and Ford under 35 U.S.C. § 103(a).

VI. DECISION

We have not sustained the Examiner’s rejection with respect to any claim on appeal. Therefore, the Examiner’s decision rejecting claims 1-16 is reversed.

REVERSED

peb

KNOBBE MARTENS OLSON & BEAR LLP
2040 MAIN STREET
FOURTEENTH FLOOR
IRVINE, CA 92614