Ex parte Bourdev et al., Appeal 2017-009692, Application 13/408,889 (P.T.A.B. May 16, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 13/408,889
FILING DATE: 02/29/2012
FIRST NAMED INVENTOR: Lubomir D. Bourdev
ATTORNEY DOCKET NO.: 2087US02
CONFIRMATION NO.: 7542
EXAMINER: KIM, SANG H
ART UNIT: 2141
NOTIFICATION DATE: 05/18/2018
DELIVERY MODE: ELECTRONIC

Correspondent: Wolfe-SBMC, 116 W. Pacific Avenue, Suite 200, Spokane, WA 99201

United States Patent and Trademark Office, Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): docket@sbmc-law.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte LUBOMIR D. BOURDEV, SHMUEL AVIDAN, and KEVIN T. DALE

Appeal 2017-009692
Application 13/408,889
Technology Center 2100

Before JOHN A. JEFFERY, ELENI MANTIS MERCADER, and JASON M. REPKO, Administrative Patent Judges.

JEFFERY, Administrative Patent Judge.

DECISION ON APPEAL

Appellants¹ appeal under 35 U.S.C. § 134(a) from the Examiner's decision to reject claims 21-40. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

¹ Appellants identify the real party in interest as Adobe Systems, Inc. App. Br. 3.

STATEMENT OF THE CASE

Appellants' invention labels images to identify image content or elements, such as backgrounds or faces. Unlabeled images may be positioned in the same display area as labeled images such that the size and position of the unlabeled elements depends on their similarity to displayed labeled images.
See generally Abstract; Spec. ¶¶ 62-65; Fig. 3.

Claims 21, 30, and 36 are illustrative:

21. A method for enabling collections of images to be labeled, the method comprising:
    receiving a gesture input selecting one or more unlabeled images displayed within a user interface;
    receiving input that moves said selected one or more unlabeled images into a display region within the user interface, the display region being located closer to a first labeled image than to another labeled image; and
    based, at least in part, on receiving the input that moves said selected one or more unlabeled images into the display region, labeling said selected one or more unlabeled images with a label associated with the first labeled image.

30. A non-transitory computer-readable storage medium storing program instructions executable on a computer to perform operations comprising:
    receiving a user input selecting a labeled image displayed within a user interface;
    receiving another user input selecting at least one unlabeled image displayed within the user interface, said another user input indicating a request to assign a label of the labeled image to said at least one unlabeled image; and
    responsive to receiving said another user input, assigning the label of the labeled image to said at least one unlabeled image.

36.
A system, comprising:
    a memory; and
    one or more processors coupled to the memory, the memory storing program instructions executable by the one or more processors to perform operations comprising:
    receiving a gesture input selecting one or more unlabeled images displayed within a user interface;
    responsive to receiving the gesture input selecting the one or more unlabeled images, determining which one of multiple labeled images, displayed within the user interface, is most similar to the one or more selected, unlabeled images; and
    labeling said one or more unlabeled images with a label associated with the labeled image most similar to the one or more selected, unlabeled images.

THE REJECTIONS

The Examiner rejected claims 30 and 34 under 35 U.S.C. § 102(e) as anticipated by Nagaoka (US 2010/0310135 A1; Dec. 9, 2010). Final Act. 6-7.²

The Examiner rejected claims 36 and 40 under 35 U.S.C. § 102(b) as anticipated by Wen (US 2008/0298766 A1; Dec. 4, 2008). Final Act. 7-8.

The Examiner rejected claims 21-23 and 27 under 35 U.S.C. § 103 as unpatentable over Wen and Aguera y Arcas (US 2010/0278435 A1; Nov. 4, 2010) ("Arcas"). Final Act. 9-11.

The Examiner rejected claims 24-26, 28, and 29 under 35 U.S.C. § 103 as unpatentable over Wen, Arcas, and Saund (US 2003/0179235 A1; Sept. 25, 2003). Final Act. 11-14.

² Throughout this opinion, we refer to (1) the Final Rejection mailed June 30, 2016 ("Final Act."); (2) the Appeal Brief filed January 10, 2017 ("App. Br."); (3) the Examiner's Answer mailed May 3, 2017 ("Ans."); and (4) the Reply Brief filed July 3, 2017 ("Reply Br.").

The Examiner rejected claims 31-33 and 35 under 35 U.S.C. § 103 as unpatentable over Nagaoka and Saund. Final Act. 14-16.

The Examiner rejected claims 37-39 under 35 U.S.C. § 103 as unpatentable over Wen and Saund. Final Act. 18-19.
THE ANTICIPATION REJECTION OVER NAGAOKA

The Examiner finds that Nagaoka discloses every recited element of independent claim 30, including receiving a user input selecting a labeled image, namely a folder, displayed within a user interface. Final Act. 6-7; Ans. 7-8. Appellants argue that selecting a folder in Nagaoka, which is said to be a container for objects, differs from selecting an image as claimed. App. Br. 20-23; Reply Br. 9-11. Appellants add that modifying Nagaoka to enable selecting images instead of folders would impermissibly change Nagaoka's principle of operation and render it inoperative for its intended purpose. App. Br. 23-24; Reply Br. 11.

ISSUE

Under § 102, has the Examiner erred in rejecting claim 30 by finding that Nagaoka's system enables receiving a user input selecting a labeled image displayed within a user interface?

ANALYSIS

We begin by noting, as does the Examiner, that independent claim 30 does not specify particulars of the recited labeled image apart from its labeling, and the fact that it is displayed and selected. Therefore, we agree with the Examiner that the recited "labeled image" covers any labeled image that is displayed and selectable within a user interface. See Ans. 7-8. Given this construction, we see no error in the Examiner's mapping Nagaoka's selectable folder 402 in Figure 6 to the recited labeled image. Final Act. 6-7; Ans. 7-8. Although the folders on the left side of Nagaoka's Figure 6 are containers for objects as Appellants indicate (App. Br. 22; Reply Br. 10), these folders are nonetheless represented by labeled images displayed on the graphical user interface. On this interface, then, the user selects a labeled image of a particular folder to select that folder. To the extent that Appellants contend that the recited labeled image must be a facial image (see App. Br. 22; Reply Br. 10), such an argument is not commensurate with the scope of the claim, which recites no such images.
We also find unavailing Appellants' contention that modifying Nagaoka to enable selecting images instead of folders would impermissibly change Nagaoka's principle of operation and render it inoperative for its intended purpose. See App. Br. 23-24; Reply Br. 11. In short, this argument is inapposite to the Examiner's anticipation rejection, which is based not on modifying Nagaoka, but rather on the fact that Nagaoka's displayed folders in Figure 6 are selectable labeled images. Therefore, we are not persuaded that the Examiner erred in rejecting claim 30, and claim 34 not argued separately with particularity.

THE ANTICIPATION REJECTION OVER WEN

The Examiner finds that Wen discloses every recited element of independent claim 36, including determining which of multiple labeled images displayed within a user interface, namely those images associated with name tags 312, 322, is most similar to one or more selected unlabeled images responsive to receiving a gesture input selecting the unlabeled image, namely by right-clicking a thumbnail image 330 or 340. Final Act. 7-8; Ans. 9-10. According to the Examiner, the unlabeled image is labeled with a label associated with the labeled image most similar to the selected unlabeled image, such as "Tom" from context menu 310. Ans. 10. Appellants argue that Wen's Figure 3 embodiment merely ranks suggested names, not images, of previously tagged people in a context menu, nor are these names equivalent to the recited labeled images displayed within the user interface. App. Br. 25-28; Reply Br. 12-13. According to Appellants, modifying Wen to display images rather than names would impermissibly change Wen's operating principle and render Wen inoperative for its intended purpose. App. Br. 28; Reply Br. 13.
Appellants add that the Examiner's reliance on Wen's embodiment in Figure 5 in connection with the recited labeling function is improper because, among other things, the embodiments of Wen's Figures 3 and 5 are mutually exclusive and not combinable. App. Br. 28-29.

ISSUE

Under § 102, has the Examiner erred in rejecting claim 36 by finding that Wen (1) determines which of multiple labeled images displayed within a user interface is most similar to one or more selected unlabeled images responsive to receiving a gesture input selecting the one or more unlabeled images, and (2) labels unlabeled images with a label associated with the labeled image most similar to the selected unlabeled image?

ANALYSIS

We begin by noting that, in rejecting claim 36, the Examiner relies on the functionality of Wen's Figure 3 for all but the labeling function in the claim's last clause, but relies on the functionality of Wen's Figure 5 for disclosing that function. See Final Act. 7-8. In the Answer, however, the Examiner relies solely on the functionality of Wen's Figure 3 for teaching the recited labeling function. See Ans. 10. Despite this shift, we nonetheless see no harmful error in the Examiner's rejection given these revised findings. As shown in Wen's Figure 3, a pop-up context menu 310 or 320 appears when the user right-clicks a thumbnail 330 or 340. Wen ¶ 48. As the Examiner indicates, this right-click-based selection fully meets receiving a gesture input selecting one or more unlabeled images displayed within a user interface, particularly in view of the "???" label associated with these selected thumbnails in Wen's Figure 3. See Final Act. 7-8. As noted in Wen's paragraph 49, context menus 310 and 320 include suggested name tags 312 and 322 that are ranked according to the similarity between faces in the selected cluster and faces of the tagged person.
Notably, these context menus 310 and 320 include not only suggested name tags 312 and 322, but also these tags' corresponding images, as shown in the partial detail views of context menus 310 and 320 in Wen's Figure 3 reproduced below:

[Partial detail views of Wen's Figure 3 showing context menus 310 and 320]

As shown above, the images in the context menus are labeled "Tom," "Kate," "Jim," and "Jackson," respectively, and therefore are "labeled images" as the Examiner indicates. Ans. 10. And because these images' name tags are ranked according to facial similarity to selected unlabeled images as noted above, these tags' corresponding images, namely the "labeled images," are likewise so ranked and displayed within the user interface. That is, this displayed ranking indicates which labeled image is most similar to the selected unlabeled images, namely the image having the highest ranking which, in Wen's Figure 3, is Tom's image. Because this context menu and similarity ranking appears responsive to the user right-clicking unlabeled images to select them, Wen's similarity determination occurs responsive to receiving this right-click-based gesture input. Notably, the user can select the label associated with the most similar image, namely the label "Tom," to use as the label for the selected unlabeled images, as the Examiner indicates. Ans. 10 (citing Wen ¶ 49). The functionality of Wen's Figure 3, then, anticipates claim 36. Appellants' contention that Wen's context menu merely ranks suggested names, not images, of previously tagged people (App. Br. 25-28; Reply Br. 12-13) is unavailing, for this argument ignores the images corresponding to the context menu's facial similarity ranking noted above.
We also find unavailing Appellants' contention that modifying Wen to display images rather than names would impermissibly change Wen's principle of operation and render Wen inoperative for its intended purpose. See App. Br. 28; Reply Br. 13. In short, this argument is inapposite to the Examiner's anticipation rejection, which is based not on modifying Wen, but rather on the fact that Wen's similarity determination in Figure 3 displays both images and names within the user interface as noted above. Lastly, Appellants' arguments regarding the Examiner's reliance on Wen's Figure 5 for teaching the recited labeling function (App. Br. 28-29) are not germane to the Examiner's reliance on Wen's Figure 3 for teaching that function on page 10 of the Answer. As noted above, we see no error in the Examiner's revised findings in this regard. Therefore, we are not persuaded that the Examiner erred in rejecting claim 36, and claim 40 not argued separately with particularity.

THE OBVIOUSNESS REJECTION OVER WEN AND ARCAS

Regarding independent claim 21, the Examiner finds that Wen's drag-and-drop functionality in Figure 5 (1) receives input that moves selected unlabeled images into a display region within a user interface, and (2) labels the selected unlabeled images with a label associated with a first labeled image. Final Act. 9-10. Although the Examiner acknowledges that Wen does not explicitly teach that the display region is located closer to the first labeled image than to another labeled image, the Examiner cites Arcas for teaching this feature in concluding that the claim would have been obvious. Final Act. 10-11. Appellants argue that the Examiner's reliance on Arcas is improper because, among other things, Arcas lacks the recited display region and does not label images. App. Br. 11-16; Reply Br. 2-6.
Appellants add that the Examiner's rationale for combining Wen and Arcas is misplaced because (1) Wen's user interface in Figure 5 is well suited to enable a user to label unlabeled images, (2) Wen's contextual re-ranking rearranges similar photos closer to the clicked photo, thus obviating the need for Arcas' teachings, (3) the cited references have different purposes, and (4) the proposed combination would render Wen less operable for its intended purpose because Wen's thumbnail area 520 and group view area 510 would grow proportionally larger and smaller, respectively, thus providing fewer options to label unlabeled images. App. Br. 16-19; Reply Br. 7. Appellants argue other recited limitations summarized below.

ISSUES

I. Under § 103, has the Examiner erred by finding that Wen and Arcas collectively would have taught or suggested (1) receiving input that moves the selected one or more unlabeled images into a display region within a user interface, where the display region is located closer to a first labeled image than to another labeled image, and (2) labeling the selected unlabeled images with a label associated with the first labeled image based at least partly on receiving the input, as recited in claim 21, and that labeling is performed without requiring a user to select the first labeled image using a selection tool, as recited in claim 22?

II. Is the Examiner's proposed combination of Wen and Arcas supported by articulated reasoning with some rational underpinning to justify the Examiner's obviousness conclusion?

ANALYSIS

Claims 21, 23, and 27

As noted above, the Examiner relies principally on Wen's drag-and-drop functionality in Figure 5 in connection with claim 21, and, in particular, the Examiner maps the face group 512 labeled "Tom" to the recited display region into which a selected unlabeled image (face 522) is moved. Final Act. 9.
As shown in Wen's Figure 5 reproduced below, an unlabeled face 522 is dragged from thumbnail area 520 and dropped into face group 512 labeled "Tom," thus automatically tagging the face 522 with the name "Tom." Wen ¶ 53.

[Wen's Figure 5, showing the drag-and-drop functionality of group view area 510 and face group 512]

Notably, Appellants do not persuasively rebut the Examiner's finding that face group 512, a display region, is located closer to photos labeled with "Tom" than to any other labeled photos. Ans. 3. This finding has at least a rational basis on this record given the relative proximity of the displayed region associated with face group 512 to the photos labeled with "Tom" as compared to the proximity of other face groups to those photos, as shown in Wen's Figure 5. Although Arcas is technically cumulative to Wen in this regard, we nevertheless see no error in the Examiner's reliance on Arcas merely to show that locating a display region closer to one image than to another is known in the art, particularly in light of the similarity-based proximity between images in Arcas' Figures 4A to 4D. Final Act. 10-11; Ans. 3-5. As shown in Arcas' Figure 4C, for example, images that are more similar to the selected center image "O" are located closer to the selected image than those that are less similar. See Arcas ¶ 38. Despite Appellants' arguments to the contrary (App. Br. 11-18; Reply Br. 2-7), we see no reason why Wen's face-group-based display regions could not be located closer to similar labeled images than to other labeled images that are less similar given Arcas' teachings. Appellants' arguments regarding Arcas' alleged individual shortcomings, including Arcas allegedly lacking the recited display region and labeled images (App. Br. 11-16; Reply Br. 2-6), are not germane to the limited purpose for which Arcas was cited, namely merely to show that locating a display region closer to one image than to another image based on their relative similarity is known in the art, and that providing such a relative location in Wen would have been at least an obvious variation. Final Act. 10-11; Ans. 3-5. Such an enhancement uses prior art elements predictably according to their established functions, an obvious improvement. See KSR Int'l Co. v. Teleflex, Inc., 550 U.S. 398, 417 (2007).

We reach this conclusion even assuming, without deciding, that the Examiner's proposed combination would somehow necessitate changing the relative sizes of the display regions as Appellants seem to suggest. App. Br. 17. Not only is this contention unsubstantiated on this record, making similar display regions larger than less similar display regions in a manner suggested by Arcas' Figure 4C could improve the user's ability to more easily identify similar display regions in Wen, thus at least contributing to improved accuracy and efficiency in labeling images associated with those regions as the Examiner indicates. See Ans. 4-5. To the extent that such relative regional size differences somehow reduce labeling options in Wen as Appellants contend (App. Br. 17), a contention that is likewise unsubstantiated on this record, such purported drawbacks could very well be outweighed by the above-noted benefits of the Examiner's proposed enhancement to Wen. In short, such considerations amount to an engineering trade-off well within the level of ordinarily skilled artisans. Therefore, we are not persuaded that the Examiner erred in rejecting claim 21, and claims 23 and 27 not argued separately with particularity.
Claim 22

We also sustain the Examiner's rejection of claim 22, which recites, in pertinent part, that labeling is performed without requiring a user to select the first labeled image using a selection tool. Despite Appellants' arguments to the contrary (App. Br. 19-20; Reply Br. 7-8), we see no error in the Examiner's finding that the user selects only an unlabeled image, namely face 522, by dragging and dropping it into the face group 512 labeled "Tom" to automatically label that image. Final Act. 11; Ans. 5-6 (citing Wen ¶ 53; Fig. 5). Although there are four images shown in that face group in Wen's Figure 5, none of those images is selected as the Examiner indicates, let alone using a selection tool as claimed. Ans. 5. Even assuming, without deciding, that face group 512 is selected with a selection tool by dropping the selected face 522 in that group as Appellants seem to suggest (App. Br. 19-20; Reply Br. 7-8), that does not mean that the images in that group, including its four images shown in Figure 5, are selected, let alone with a selection tool as claimed. Appellants' arguments to the contrary are unavailing and not commensurate with the scope of the claim. Therefore, we are not persuaded that the Examiner erred in rejecting claim 22.

THE OBVIOUSNESS REJECTION OVER WEN, ARCAS, AND SAUND

We also sustain the Examiner's rejection of claim 24 over Wen, Arcas, and Saund. Final Act. 12-13. Despite Appellants' arguments to the contrary (App. Br. 20; Reply Br. 8), we see no error in the Examiner's reliance on Saund's paragraphs 10 and 26 merely to show that using rectangles to select displayed images is known in the art, and that providing such a feature in the Wen/Saund system would have been obvious to, among other things, improve selecting similar images for annotation (Final Act. 12-13; Ans. 6-7), a predictable result given the user's ability to bound particular displayed elements of interest within a rectangular selection region as shown, for example, in the region 160 in Saund's Figure 1. Such an enhancement uses prior art elements predictably according to their established functions, an obvious improvement. See KSR, 550 U.S. at 417. On this record, then, the Examiner's proposed combination of the cited references is supported by articulated reasoning with some rational underpinning to justify the Examiner's obviousness conclusion. We reach this conclusion not only for claim 24, but also for claims 25, 26, 28, and 29. Although Appellants nominally argue that the Examiner did not offer a motivation to combine the references for claims 25, 26, 28, and 29 (App. Br. 20), the Examiner's rejection of these claims is based on the same references cited in connection with claim 24, and the Examiner's articulated reason to combine those references likewise applies to those claims despite the references teaching or suggesting additional recited features. In short, these additional features use prior art elements predictably according to their established functions, an obvious improvement. See KSR, 550 U.S. at 417. Therefore, we are not persuaded that the Examiner erred in rejecting claims 24-26, 28, and 29.

THE OBVIOUSNESS REJECTION OVER NAGAOKA AND SAUND

We also sustain the Examiner's rejection of claim 31 over Nagaoka and Saund. Final Act. 14-15. Despite Appellants' arguments to the contrary (App. Br. 24; Reply Br. 11-12), we see no error in the Examiner's reliance on Saund's paragraphs 10 and 26 merely to show that using rectangles to select displayed images is known in the art, and that providing such a feature in Nagaoka's system would have been obvious to, among other things, improve selecting similar images for annotation (Final Act. 14-15; Ans. 8-9), a predictable result given the user's ability to bound particular displayed elements of interest within a rectangular selection region as shown, for example, in the region 160 in Saund's Figure 1. Such an enhancement uses prior art elements predictably according to their established functions, an obvious improvement. See KSR, 550 U.S. at 417. On this record, then, the Examiner's proposed combination of the cited references is supported by articulated reasoning with some rational underpinning to justify the Examiner's obviousness conclusion. We reach this conclusion not only for claim 31, but also for claims 32, 33, and 35. Although Appellants nominally argue that the Examiner did not offer a motivation to combine the references for claims 32, 33, and 35 (App. Br. 24-25), the Examiner's rejection of these claims is based on the same references cited in connection with claim 31, and the Examiner's articulated reason to combine those references likewise applies to those claims despite the references teaching or suggesting additional recited features. In short, these additional features use prior art elements predictably according to their established functions, an obvious improvement. See KSR, 550 U.S. at 417. Therefore, we are not persuaded that the Examiner erred in rejecting claims 31-33 and 35.

THE OBVIOUSNESS REJECTION OVER WEN AND SAUND

We also sustain the Examiner's rejection of claim 37 over Wen and Saund. Final Act. 18-19. Despite Appellants' arguments to the contrary (App. Br. 30; Reply Br. 13-14), we see no error in the Examiner's reliance on Saund's paragraphs 10 and 26 merely to show that using rectangles to select displayed images is known in the art, and that providing such a feature in Nagaoka's system would have been obvious to, among other things, improve selecting similar images for annotation (Final Act. 14-15; Ans. 8-9), a predictable result given the user's ability to bound particular displayed elements of interest within a rectangular selection region as shown, for example, in the region 160 in Saund's Figure 1. Such an enhancement uses prior art elements predictably according to their established functions, an obvious improvement. See KSR, 550 U.S. at 417. On this record, then, the Examiner's proposed combination of the cited references is supported by articulated reasoning with some rational underpinning to justify the Examiner's obviousness conclusion. We reach this conclusion not only for claim 37, but also for claims 38 and 39. Although Appellants nominally argue that the Examiner did not offer a motivation to combine the references for claims 38 and 39 (App. Br. 30), the Examiner's rejection of these claims is based on the same references cited in connection with claim 37, and the Examiner's articulated reason to combine those references likewise applies to those claims despite the references teaching or suggesting additional recited features. In short, these additional features use prior art elements predictably according to their established functions, an obvious improvement. See KSR, 550 U.S. at 417. Therefore, we are not persuaded that the Examiner erred in rejecting claims 37-39.

CONCLUSION

The Examiner did not err in rejecting (1) claims 30, 34, 36, and 40 under § 102, and (2) claims 21-29, 31-33, 35, and 37-39 under § 103.

DECISION

We affirm the Examiner's decision to reject claims 21-40.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED