UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/878,477
FILING DATE: 10/08/2015
FIRST NAMED INVENTOR: Daesung KIM
ATTORNEY DOCKET NO.: 42P46307C2
CONFIRMATION NO.: 3078

45209 7590 07/29/2020
WOMBLE BOND DICKINSON (US) LLP/Mission
Attn: IP Docketing
P.O. Box 7037
Atlanta, GA 30357-0037

EXAMINER: HANNETT, JAMES M
ART UNIT: 2698
NOTIFICATION DATE: 07/29/2020
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated “Notification Date” to the following e-mail address(es): Database_Group@bstz.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte DAESUNG KIM and JAIHYUN AHN

Appeal 2019-003223
Application 14/878,477
Technology Center 2600

Before LARRY J. HUME, JUSTIN BUSCH, and CARL L. SILVERMAN, Administrative Patent Judges.

BUSCH, Administrative Patent Judge.

DECISION ON APPEAL

Pursuant to 35 U.S.C. § 134(a), Appellant[1] appeals from the Examiner’s decision to reject claims 34–58, which are all the claims pending. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

[1] We use the word “Appellant” to refer to “applicant” as defined in 37 C.F.R. § 1.42(a). Appellant identifies the real party in interest as Intel Corporation. Appeal Br. 3.

STATEMENT OF THE CASE

INTRODUCTION

The claimed subject matter generally relates to interfaces and techniques to generate composite images from multiple consecutive images by selecting different faces from the various images. Spec. 1:5–11, 2:10–21, 2:24–3:3. The claimed invention detects a person’s face in a plurality of images, selects a face region in one of the images as the person’s base face image, provides other images of the person’s face, selects one of the other images, and generates a synthesized face image of the person by synthesizing the base face image with the selected face image. See Spec. 2:24–3:13.

Claims 34, 42, 48, and 53 are independent claims. Claim 48 is representative and reproduced below:

    48. A method comprising:
    detecting a face region of a person in a plurality of photographed images;
    selecting a detected face region of the person in one of the plurality of photographed images for a base face image of the person;
    providing a plurality of face images of the person, wherein the base image is a different image than the plurality of face images;
    selecting a face image of the person from among two or more face images;
    synthesize the base face image with the selected face image to generate a synthesized face image of the person; and
    storing the synthesized face image of the person.

THE PENDING REJECTIONS

Claims 34, 35, 37–44, 46–49, and 51–55 stand rejected under 35 U.S.C. § 102 as anticipated by Richards (US 2011/0268369 A1; Nov. 3, 2011). Final Act. 3–5.

Claims 36, 45, 50, and 56–58 stand rejected under 35 U.S.C. § 103 as obvious in view of Richards and Official Notice. Final Act. 5–6.
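The recited steps follow a detect-select-replace image pipeline. The sketch below is a minimal illustration of one plausible reading of claim 48; the use of OpenCV, the Haar-cascade detector, the file names, and the resize-and-paste synthesis are assumptions made for illustration and are not taken from the application or the record.

```python
# Illustrative sketch only: one plausible reading of claim 48's steps.
# OpenCV, the Haar cascade, and the file names are assumptions.
import cv2

def detect_face_regions(image):
    """Detect face regions as (x, y, w, h) rectangles in one image."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def synthesize(base_image, base_region, face_patch):
    """Generate a new image by replacing the selected face region of the
    base image with the selected face image (resized to fit)."""
    x, y, w, h = base_region
    out = base_image.copy()
    out[y:y + h, x:x + w] = cv2.resize(face_patch, (w, h))
    return out

# Detect a face region of a person in a plurality of photographed images.
images = [cv2.imread(p) for p in ["shot0.jpg", "shot1.jpg", "shot2.jpg"]]
regions = [detect_face_regions(img) for img in images]

# Select a detected face region in one image as the base face image
# (the first face of the first image stands in for a user's selection).
base_image, base_region = images[0], regions[0][0]

# Provide face images of the person from the other images and select one
# (here, the corresponding face region of the second image).
x, y, w, h = regions[1][0]
selected_face = images[1][y:y + h, x:x + w]

# Synthesize and store the synthesized face image of the person.
cv2.imwrite("synthesized.jpg",
            synthesize(base_image, base_region, selected_face))
```

On the construction the Board adopts below, the synthesis step encompasses exactly this kind of outright replacement of the base face region with a face taken from a different image, rather than digital alteration of the face in place.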
ANALYSIS

The Examiner rejects claims 34, 35, 37–44, 46–49, and 51–55 as anticipated by Richards, and the Examiner rejects claims 36, 45, 50, and 56–58 as obvious in view of Richards and Official Notice. Final Act. 3–6. In particular, the Examiner finds Richards’ system, which generates an image from multiple received images by detecting and selecting different regions (e.g., people or faces) in the multiple images and combining regions from the multiple images by replacing one region in a base image with the same region from a different image, discloses the claimed systems and methods recited in independent claims 34, 42, 48, and 53. Final Act. 3–5 (citing Richards ¶¶ 17, 21, 43, 50).

More specifically, the Examiner finds Richards discloses detecting faces using an object detection module because Richards detects regions that can be people or faces. Ans. 6 (citing Richards ¶ 21). The Examiner finds Richards discloses an image with multiple regions/faces and, therefore, that image includes the recited base face image. Ans. 6. The Examiner also finds that Richards generates “a combined image by replacing one or more regions (faces) in the base image with corresponding regions in other images perceived as better,” which the Examiner finds teaches the recited synthesis. Ans. 6 (citing Richards, Fig. 6).

Appellant argues the claims as a group. See Appeal Br. 9–10. We select independent claim 48 as representative of all pending claims. See 37 C.F.R. § 41.37(c)(1)(iv).

Appellant’s disclosed and claimed invention relates to generating a composite image from multiple consecutive images using at least one face from a different one of the consecutive images than the base image. Spec. 1:5–11, 2:10–21, 2:24–3:3. More specifically, “[t]he editing unit 130 provides an editing function of selecting a base image that will be used as a background image among the images consecutively photographed by the image pickup unit 100 and a face image that will be synthesized with the corresponding base image.” Spec. 6:25–28. To accomplish this, the invention includes “a base image selection unit 131, a face image selection unit 132 and an image synthesizing unit 133.” Spec. 7:20–21; see Spec., Fig. 2.

The base image selection unit allows a user to select a base image to use “as a background image from the images consecutively photographed . . . and in creating a synthesized image.” Spec. 7:22–28; see Spec. 10:13–12:17 (describing the interface and process for selecting a base image as the background image used to synthesize the final composite image), Figs. 5a–5c (depicting an exemplary user interface for selecting a base image). In the base image selection screen, the interface may draw a rectangle around detected face regions. Spec. 12:12–15. “When the user selects the face region 340, the selection of a base image is finished, and the process proceeds to the face image selection step.” Spec. 12:15–17.

The face image selection unit outputs face images detected by a face detection unit in the consecutively photographed images and allows a user to select a “face image among the output face images . . . and uses the face image for image synthesis.” Spec. 7:29–8:4; see Spec. 12:20–15:7 (describing the interface and process for selecting a face image used to create a synthesized image “by synthesizing the face image with a previously selected base image”), Figs. 7a–7c (depicting an exemplary user interface for selecting a face image); see also Spec. 15:8–15:10 (describing that a face region may be selected for each person in a picture in the face selection step).

The image synthesizing unit synthesizes the selected face image with the selected base image “so that an image that is satisfactory to the user in terms of the background and the person illustrated in the image can be provided.” Spec. 8:11–17; see Spec. 15:15–16:10 (describing the process and exemplary interface, including various user interface options such as using a “best image” as an optional base image, for synthesizing an image from a base background image and selected face image(s)), Figs. 9a–9l (depicting an exemplary user interface for selecting a face image). In particular, once a face image (or images if there are multiple people in the picture) is selected, the system creates a synthesized image “in which the face image is synthesized with the previously selected base [background] image.” Spec. 16:6–7.

When construing claim terminology during prosecution before the Office, claims are to be given their broadest reasonable interpretation consistent with the Specification, reading claim language in light of the Specification as it would be interpreted by one of ordinary skill in the art. In re Am. Acad. of Sci. Tech Ctr., 367 F.3d 1359, 1364 (Fed. Cir. 2004). The correct inquiry in giving a claim term its broadest reasonable interpretation in light of the specification is “an interpretation that corresponds with what and how the inventor describes his invention in the specification, i.e., an interpretation that is ‘consistent with the specification.’” In re Smith Int’l, Inc., 871 F.3d 1375, 1382–83 (Fed. Cir. 2017) (quoting In re Morris, 127 F.3d 1048, 1054 (Fed. Cir. 1997)). We presume that claim terms have their ordinary and customary meaning. See In re Translogic Tech., Inc., 504 F.3d 1249, 1257 (Fed. Cir. 2007) (“The ordinary and customary meaning is the meaning that the term would have to a person of ordinary skill in the art in question.”) (internal quotation marks omitted).

The claim provides no explanation or definition of the “base face image.” Accordingly, we determine the broadest reasonable interpretation by considering the plain meaning of each of the three terms (i.e., “base,” “face,” and “image”) in light of the Specification and how the terms are arranged in the phrase. The Specification describes “base images” and “face images,” but the Specification does not mention a “base face image.” See generally Spec. The Specification does, however, disclose “select[ing] a face region (see reference numeral 340 of Fig. 5c that will be described later) in the base image.” Spec. 8:6–8.

In light of these and the previously discussed disclosures in the Specification, we construe a “base face image” as a region in a selected base image (i.e., an image used as a starting point in generating a composite image) that includes a person’s face such that the base face image may be replaced with a corresponding face region from another image of the set of consecutive images when generating the synthesized image from the base image and the selected face image. Spec. 12:12–17; see Spec. 1:5–11, 2:10–21, 2:24–3:3, 6:25–28, 7:20–7:28, 10:13–15:7, Figs. 2, 5a–5c, 7a–7c.
Accordingly, we construe “selecting a detected face region of the person in one of the plurality of photographed images for a base face image of the person” to encompass indicating or choosing a particular face region in the base image.

Similarly, we apply the broadest reasonable interpretation, consistent with the Specification, to construe the claimed step of “synthesiz[ing] the base face image with the selected face image to generate a synthesized face image of the person.” The claim language itself does not ascribe a particular meaning to “synthesize,” but the limitation recites synthesizing a base face image (a first image or item) with the selected face image (a second image or item) to generate a synthesized face image of the person (a new, combined image or item). The plain and ordinary meaning of synthesizing one item with another item merely requires combining those two items in some way or producing a new item from the two items. This is consistent with the Specification’s disclosure relating to synthesizing an image from a base image and a selected face image. Spec. 7:29–8:4, 8:11–17, 12:20–15:7, 15:15–16:10, Figs. 9a–9l. Accordingly, we construe the synthesizing step as generating a new image that replaces the face region in the base image (i.e., replaces the base face image) with the face image selected from an image other than the base image.

Richards describes a similar system and process. Richards identifies a problem similar to Appellant’s identified problem, namely that “for a group of people, when the picture is taken one or more people may be blinking, frowning, looking away from the camera, and so forth.” Richards ¶ 1; see also Richards ¶¶ 14 (explaining that “if the region represents a face” determining which region is “best” may be based on “whether people are smiling, whether people have their eyes open, etc.”), 21 (“Object detection module 122 can be configured to detect a variety of different types of objects within regions of images” such as “for example, people (or faces of people).”), 43 (describing eye characteristics that may be evaluated to determine a best region if the region includes a person’s face).

To solve this problem, Richards generates a combined image from the multiple images based on the quality of each of the corresponding respective regions identified in the multiple images. Richards ¶¶ 3, 50. To do this, Richards identifies a base image and generates a combined image from the multiple images and, “for a particular region of the base image, corresponding regions of the other images can be displayed, and the particular region replaced with a user-selected one of the corresponding regions of the other images.” Richards, Abstract; see Richards, Fig. 3 (depicting 3 consecutive images with 5 corresponding regions, including identifying various crosshatched regions as the “best” version of that region for purposes of generating a combined image from the different portions of the 3 images). In one exemplary method, a best image is the image having the most regions that are the “best” regions of the various images. Richards ¶¶ 4, 14 (“A base image of the multiple images is selected, typically by selecting the image having the most regions that will not be replaced by a corresponding region from the other of the multiple regions.”).
The system then generates a combined image by “automatically replacing each region of the base image with a corresponding region of another image of the multiple images.” Richards ¶¶ 4, 53–55, 57. Additionally, the system may receive a user-selected region from one of the images other than the base image and, in response, replace the particular region in the base image with the user-selected region. Richards ¶¶ 4, 58. One way a user may provide the input is by “clicking a particular button of a mouse when a cursor is displayed on top of region 338” (i.e., a detected face region in base image 306), then selecting one of the corresponding regions from one of the images other than the base image. Richards ¶ 61; see Richards ¶¶ 60–62, Fig. 5.

Appellant asserts that Richards automatically replaces regions in one image with corresponding regions from other images to generate a combined image, but “Richards does not detect a face region of a person in the plurality of images . . . where a user can select one of the face region[s] as a base face image of a person – which is later use[d] for synthesis.” Appeal Br. 9; Reply Br. 5. Appellant acknowledges that Richards’ image 306 of a scene may include a person’s face and “would be considered a base image, but it is not a base face image of a person.” Appeal Br. 9; see Reply Br. 4–5 (arguing Richards’ base image “is not a base face image because Richards does not teach selecting a detected face region as a base face image”).

We agree with Appellant that Richards discloses automatically replacing regions with corresponding regions from other images to generate a “best” combined image. However, as discussed above, Richards discloses more than just automatically replacing regions. Richards detects regions that include objects, such as people or faces. Richards ¶¶ 14, 21, 43. Richards also accepts user input to select (1) a particular region (i.e., selecting a base face image of a person) in the base image or best image to be replaced and (2) the corresponding region from other images that will replace the particular region from the base image or best image. See Richards ¶¶ 20 (“Additionally, user interface module 106 can allow a user to override the automatic selections made by image combining module”), 58 (“In addition, . . . combining module 126 and UI module 106 allow a user to provide input regarding which one of multiple corresponding regions is to be selected for inclusion in combined image 112. This user input can override the automatic selections . . . or alternatively can be input at different times (e.g., prior to the automatic selection being made by image combining module 126).”), 60–62 (describing an exemplary user interface for selecting a region in the base image to be replaced and selecting which corresponding region should replace the particular region from the base image), 70–74 (describing an exemplary process for selecting which corresponding region should replace the particular region from the base image).

To the extent Appellant argues that the base face image is part of the resulting combined image or somehow combined with the selected face image, we disagree.
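The mechanism the Board attributes to Richards (scoring each corresponding region, choosing the base image as the image contributing the most “best” regions, and letting user input override the automatic selections) can be summarized abstractly as follows. This is a minimal sketch under stated assumptions: the data layout, the scoring values, and every function name are invented for illustration and are not Richards’ implementation.

```python
# Illustrative sketch only: an abstract rendering of the combining flow the
# decision describes in Richards. Names, data layout, and scores are assumed.
from typing import Dict, List

Region = int   # index of a corresponding region across the images
ImageId = int  # index of an image in the consecutive set

def choose_best_regions(scores: Dict[Region, List[float]]) -> Dict[Region, ImageId]:
    """For each region, pick the image whose version of that region scores
    highest (e.g., eyes open, smiling; cf. Richards paras. 14, 43)."""
    return {r: max(range(len(s)), key=lambda i: s[i]) for r, s in scores.items()}

def choose_base_image(best: Dict[Region, ImageId], n_images: int) -> ImageId:
    """Pick the image having the most 'best' regions, so the fewest regions
    need replacing (cf. Richards para. 14)."""
    counts = [sum(1 for img in best.values() if img == i) for i in range(n_images)]
    return max(range(n_images), key=lambda i: counts[i])

def combine(best: Dict[Region, ImageId], base: ImageId,
            overrides: Dict[Region, ImageId]) -> Dict[Region, ImageId]:
    """Plan the combined image: each region not already best in the base image
    is replaced, and user-selected overrides trump the automatic selections
    (cf. Richards para. 58)."""
    plan = dict(best)
    plan.update(overrides)  # user input overrides the automatic selections
    return {r: img for r, img in plan.items() if img != base}

# Example: 3 consecutive images, 5 regions, one quality score per version.
scores = {0: [0.9, 0.4, 0.2], 1: [0.1, 0.8, 0.3], 2: [0.7, 0.2, 0.6],
          3: [0.5, 0.9, 0.1], 4: [0.6, 0.3, 0.8]}
best = choose_best_regions(scores)
base = choose_base_image(best, n_images=3)
# The user forces region 2 to come from image 1 instead of the automatic pick.
print(combine(best, base, overrides={2: 1}))
```

On this reading, Richards’ user-selected override of a particular region maps onto the claimed selections: choosing the region of the base image to be replaced (the base face image) and choosing the corresponding region that replaces it (the selected face image).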
We construed the synthesizing step above; Appellant does not point to anything in the Specification that would support such a narrow construction.[2]

Appellant argues the Specification shows “five consecutive photographs of a person and a person can select one of the face images as [a] base face image.” Reply Br. 4 (citing Spec. 15:15–16:10, Fig. 9h). However, Figure 9h depicts a user interface for selecting “one of the five face images in the face image candidate group,” which is the selected face image that replaces the face region in the base image. Spec. 16:1–7. Moreover, page 15 line 15 through page 16 line 10 of the Specification merely describes, at a high level, the interface and process used to receive consecutive images, select one of the consecutive images as a base image, select a face region for each detected face (as depicted in Figure 9h), and create “a synthesized image, in which the face image is synthesized with the previously selected base [background] image.” Spec. 16:6–7.

[2] In fact, to the extent Appellant asserts a construction that requires a digitally altered (as opposed to replaced) face region, such a construction would appear to be inconsistent with the Specification. In particular, the Specification criticizes a prior art “method of editing or synthesizing a picture” as having “a problem in that the facial expression is artificially created by directly modifying a photographed picture and thus is unnatural.” Spec. 2:4–6.

Appellant further asserts Richards does not disclose synthesizing the best regions in the multiple images “with a base face image to generate a synthesized face image.” Appeal Br. 9. Appellant contends that Richards’ replacement of one region with another is not the same as what is recited in the independent claims. Reply Br. 5–6 (citing Richards ¶¶ 53–56, Figs. 3, 5; Spec. 16:2–10, 7:24–28, Figs. 9i–9k). We disagree. As discussed above, in light of Appellant’s Specification, we construe “synthesiz[ing] the base face image with the selected face image to generate a synthesized face image of the person” to encompass replacing a face from the base image (i.e., the base face image) with a selected face from another image. Appellant does not persuasively demonstrate that synthesizing excludes Richards’ disclosed replacement, which is similar to how the Specification describes creating a synthesized image. The Specification portions that Appellant cites are the same disclosures we discussed above.

Accordingly, we sustain the rejection of independent claim 48 under 35 U.S.C. § 102 as anticipated by Richards. We also sustain the rejection of claims 34, 35, 37–44, 46, 47, 49, and 51–55, which Appellant does not argue separately with particularity, under 35 U.S.C. § 102 as anticipated by Richards. For the same reasons, we sustain the rejection of claims 36, 45, 50, and 56–58, which Appellant does not argue separately with particularity, under 35 U.S.C. § 103 as obvious in view of Richards and Official Notice.

DECISION SUMMARY

Claims Rejected              35 U.S.C. §   References                  Affirmed                     Reversed
34, 35, 37–44, 46–49, 51–55  102           Richards                    34, 35, 37–44, 46–49, 51–55
36, 45, 50, 56–58            103           Richards, Official Notice   36, 45, 50, 56–58
Overall Outcome                                                        34–58

TIME PERIOD FOR RESPONSE

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED