Ex Parte Egerton, No. 14/297,418 (P.T.A.B. Jul. 27, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 14/297,418
FILING DATE: 06/05/2014
FIRST NAMED INVENTOR: Jamie Egerton
ATTORNEY DOCKET NO.: ACTV-002.101
CONFIRMATION NO.: 3037
EXAMINER: LHYMN, SARAH
ART UNIT: 2613
MAIL DATE: 07/27/2018
DELIVERY MODE: PAPER

23410 7590 07/27/2018
Vista IP Law Group LLP
100 Spectrum Center Drive
Suite 900
IRVINE, CA 92618

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte JAMIE EGERTON

Appeal 2018-005062
Application 14/297,418
Technology Center 2600

Before MAHSHID D. SAADAT, ALLEN R. MacDONALD, and JOHN P. PINKERTON, Administrative Patent Judges.

SAADAT, Administrative Patent Judge.

DECISION ON APPEAL

Appellant¹ appeals under 35 U.S.C. § 134(a) from the Examiner's Non-Final Rejection of claims 1-27. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

¹ According to Appellant, the real party in interest is Activision Publishing, Inc. App. Br. 2.

STATEMENT OF THE CASE

Appellant's invention relates to acquiring facial motion data. Spec. 1:4-5. Exemplary claim 1 under appeal reads as follows:

1.
A method of acquiring facial motion data comprising:

a) displaying on an electronic display an image of a person with a first facial expression, for a first time period having a first start time and a first end time;

b) outputting a first timing cue during the first time period, the first timing cue including a timing cue representing the first start time and a timing cue representing the first end time;

c) displaying on the electronic display an image of the person with a second facial expression, for a second time period having a second start time and a second end time;

d) outputting a second timing cue during the second time period, the second timing cue including a timing cue representing the second start time and a timing cue representing the second end time;

e) displaying on the electronic display an image of the person with the first facial expression, for a third time period having a third start time and a third end time;

f) outputting a third timing cue during the third time period, the third timing cue including a timing cue representing the third start time and a timing cue representing the third end time;

g) displaying on the electronic display an image of the person with the second facial expression, for a fourth time period having a fourth start time and a fourth end time;

h) outputting a fourth timing cue during the fourth time period, the fourth timing cue including a timing cue representing the fourth start time and a timing cue representing the fourth end time;

i) acquiring a first set of facial expression data comprising facial expressions of an actor mimicking the facial expressions of the person on the display during the corresponding time periods;

j) associating the acquired facial expressions of the actor with facial expression data corresponding to facial expressions of the person on the display during the corresponding time periods, based at least in part on the timing cues;

k) creating a data
file comprising the first set of facial expression data, including associations of the acquired facial expressions of the actor with facial expression data corresponding to facial expressions of the person on the display during the corresponding time periods, said data file being configured for use by a facial control system to drive facial expressions of a character; and

wherein the method of acquiring facial motion data proceeds continuously without user intervention.

REFERENCES and REJECTION

Claims 1-27 stand rejected under 35 U.S.C. § 103 as unpatentable over Rosenberg (US 2008/0165195 A1; published July 10, 2008) ("Rosenberg"), Barker et al. (US 2012/0041759 A1; published Feb. 16, 2012) ("Barker"), Menache (US 2004/0179013 A1; published Sept. 16, 2004), Weise et al. (US 2013/0147788 A1; published June 13, 2013) ("Weise"), and Giefing et al. (US 5,864,363; issued Jan. 26, 1999) ("Giefing"). See Non-Final Act. 3-33.

PRINCIPLES OF LAW

The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. In re Keller, 642 F.2d 413, 425 (CCPA 1981) (citations omitted). Indeed, the Supreme Court made clear that when considering obviousness, "the analysis need not seek out precise teachings directed to the specific subject matter of the challenged claim, for a court can take account of the inferences and creative steps that a person of ordinary skill in the art would employ." KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 418 (2007).

ANALYSIS

We have reviewed the Examiner's rejections in light of Appellant's arguments in the Appeal Brief and Reply Brief that the Examiner has erred.
We are unpersuaded by Appellant's contentions and concur with the findings and conclusions reached by the Examiner as explained below.

Independent Claims 1 and 22

Appellant contends the combination of cited references fails to teach or suggest a method of acquiring facial motion data in which the steps of displaying images of desired posed facial expressions, outputting respective timing cues, and acquiring images of facial expressions of an actor are performed "continuously without user intervention," as recited in independent claim 1, and similarly recited in independent claim 22. See App. Br. 8-9. More specifically, Appellant contends it would not have been obvious to modify Rosenberg in view of Barker, Menache, Weise, and Giefing to continuously, and without user intervention, perform the steps of displaying images of desired posed facial expressions, outputting respective timing cues, and acquiring images of facial expressions of an actor for two reasons. See App. Br. 10-11. First, according to Appellant, the combination of Rosenberg, Barker, Menache, and Weise is explicitly limited to a system which requires user input to proceed through a sequence of facial poses and obtaining images, which does not proceed continuously without interruption, but, instead, requires user-activated triggers to proceed through the sequence. See App. Br. 11-13; see also Reply Br. 3. Second, according to Appellant, Giefing's mere disclosure of taking a single photo automatically does not teach or suggest modifying Rosenberg to continuously and automatically proceed through a process of displaying a sequence of images of desired posed facial expressions, outputting respective timing cues, and acquiring images of facial expressions of an actor. See App. Br. 13-15; see also Reply Br. 2-4.

Appellant's contention is not persuasive.
Consistent with the Examiner's findings, we are persuaded the cited references disclose the following: (1) Rosenberg teaches an automated method of capturing facial images of a user including prompting a user to make facial poses and capturing still images of the user depicting the facial poses; (2) Barker teaches presenting visual and audio cues to assist in synchronization of recorded audio dialog with displayed lip movement; (3) Menache teaches capturing facial images of an actor performing facial poses including repeating a default pose on a periodic basis and using the captured data to animate a character having facial features; (4) Weise teaches a method of animating a digital character according to facial expressions of a user including determining expression parameters for a series of frames which depict the user's face at different points in time and determining animation parameters which animate a digital character to mimic the user's facial features; and (5) Giefing teaches a method of automatically capturing an image of a person's face. See Non-Final Act. 4-10 (citing Rosenberg ¶¶ 22, 55; Barker ¶¶ 35, 43, 55; Menache ¶¶ 45, 47; Weise ¶¶ 6, 7, 46; Giefing 2:17-60).

In light of these teachings, we agree with the Examiner that it would have been obvious to a person of ordinary skill in the art, at the time the claimed invention was made, to combine the teachings of Rosenberg, Barker, Menache, Weise, and Giefing, so that the facial motion data acquisition steps taught in Rosenberg, Barker, Menache, and Weise are performed continuously without user intervention as taught by Giefing. See Non-Final Act. 10; see also Ans. 5. Such a combination would merely be a combination of familiar elements according to known methods yielding no more than predictable results. See KSR, 550 U.S. at 416.
Appellant's argument that Giefing merely teaches automating a single step of the claimed process without user intervention (i.e., automating capturing a single image of an actor mimicking a single facial expression without user intervention) (see App. Br. 13-15; see also Reply Br. 2-4) is not persuasive because Giefing teaches that the entire image acquisition method (including processing the image to identify a face and person from the captured image) is performed automatically without user intervention. See, e.g., Giefing 5:3-22. Further, Appellant fails to identify a definition, in either claims 1 and 22 or Appellant's Specification, of "wherein the method of acquiring facial motion data proceeds continuously" without user intervention, as recited in the aforementioned claims, that distinguishes the aforementioned element from the combined teachings of the cited references. See, e.g., Spec. 13:22-23 ("A facial expression data capture session may proceed continuously by, e.g., playing the entire video deck with no interruptions").

Thus, we agree with the Examiner that the combination of cited references teaches or suggests all the elements of claims 1 and 22. Therefore, we sustain the rejection of claims 1 and 22 under 35 U.S.C. § 103.

Remaining Claims

No separate arguments are presented for the remaining dependent claims. See App. Br. 7-15. We, therefore, sustain their rejections for the reasons stated with respect to independent claims 1 and 22.

DECISION

We affirm the Examiner's rejection of claims 1-27 under 35 U.S.C. § 103.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED