UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

Application No.: 14/732,393          Filing Date: 06/05/2015
First Named Inventor: Hang Yuan      Attorney Docket No.: 082438.032650     Confirmation No.: 1232
Correspondent: BakerHostetler / Apple Inc., Washington Square, Suite 1100, 1050 Connecticut Ave. NW, Washington, DC 20036-5304
Examiner: MAHMUD, FARHAN             Art Unit: 2483
Notification Date: 03/06/2020        Delivery Mode: Electronic

________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
________________

Ex parte HANG YUAN, CHRIS Y. CHUNG, JAE HOON KIM, YEPING SU, JIEFU ZHAI, XIAOSONG ZHOU, and HSI-JUNG WU
________________

Appeal 2018-007934
Application 14/732,393
Technology Center 2400
________________

Before BRADLEY W. BAUMEISTER, JASON V. MORGAN, and DAVID J. CUTITTA II, Administrative Patent Judges.

MORGAN, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Introduction

Pursuant to 35 U.S.C. § 134(a), Appellant[1] appeals from the Examiner's decision to reject claims 1–24. We have jurisdiction under 35 U.S.C. § 6(b). An oral hearing was requested, but waived. Waiver of Hearing (Jan. 16, 2020).

We REVERSE.

[1] We use the word "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42. Appellant identifies the real party-in-interest as Apple Inc. Appeal Br. 1.
Summary of the disclosure

Appellant's claimed subject matter relates to conversion of a still image to a video sequence coded by motion-compensated prediction techniques, where metadata is generated "that identifies allocations of content from the still image to the frames of the video sequence." Abstract. Appellant's claimed subject matter also relates to conversion of a decoded video sequence back to the still image. Id.

Illustrative claims (key limitations emphasized)

1. A coding method, comprising:
    converting a single still image to be coded to a video sequence;
    coding the video sequence by motion compensated prediction that includes temporal prediction references between frames of the video sequence;
    generating metadata identifying allocations of content from the still image to the frames of the video sequence; and
    transmitting coded data of the video sequence and the metadata to a channel.

17. A decoding method, comprising:
    decoding a coded video sequence by motion compensated prediction that includes temporal prediction references between frames of the video sequence, and
    responsive to metadata, received in a channel with the coded video sequence, converting the decoded video sequence to a single still image.

The Examiner's rejection and cited reference

The Examiner rejects claims 1–24 under 35 U.S.C. § 102(a)(1) as anticipated by Yi et al. (US 2012/0307904 A1; published Dec. 6, 2012) ("Yi"). Final Act. 3–11.
ISSUES

Whether the Examiner's findings show that Yi discloses the claim 1 recitations of: (1) "converting a single still image to be coded to a video sequence" and (2) "generating metadata identifying allocations of content from the still image to the frames of the video sequence."

Whether the Examiner's findings show that Yi discloses "responsive to metadata, received in a channel with the coded video sequence, converting the decoded video sequence to a single still image," as recited in claim 17.

ANALYSIS

Claims 1–16

The claimed invention relates to "distributing content of a source frame"—i.e., a single still image—"to frames of [a] phantom video sequence." Spec. ¶ 47. For example, individual pixels of the single still image can be distributed among frames "according to a predetermined distribution pattern." Id.; see also id. Fig. 5(a). The single still image can be downsampled using filters of different strengths to generate the frames. Id. ¶ 48, Fig. 5(b). Or the single still image can be parsed into multiple tiles that are converted to frames that each correspond to one of the tiles. Id. ¶ 51, Fig. 5(c).

Regardless of how the single still image is converted to multiple frames for coding to a video sequence, metadata is "generated that identifies allocations of content from the still image to the frames of the video sequence." Id. ¶ 14; see also id. ¶ 46, Fig. 4. This enables a post-processor to "reassemble content for the still image from the frames of the phantom video sequence." Id. ¶ 30. That is, the single still image can be recovered from the phantom video sequence, albeit with potential errors resulting from the type of video sequence coding and decoding used. See id. ¶ 34.
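For the reader unfamiliar with the specification, the tiling variant described above (Spec. ¶ 51) can be sketched in a few lines. This is an illustrative sketch only, not the disclosed implementation; the tile dimensions and the metadata field names (`mode`, `allocations`, `frame`, `x`, `y`) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of the tiling conversion of Spec. para. 51: a single
# still image is parsed into tiles, each tile becomes one frame of a
# "phantom" video sequence, and metadata records how content from the still
# image was allocated to the frames. Field names are illustrative assumptions.

def image_to_phantom_sequence(image, tile_w, tile_h):
    """Split a 2-D pixel grid (list of rows) into tile frames plus metadata."""
    height, width = len(image), len(image[0])
    frames, allocations = [], []
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            # Each tile of the still image becomes one frame of the sequence.
            tile = [row[x:x + tile_w] for row in image[y:y + tile_h]]
            allocations.append({"frame": len(frames), "x": x, "y": y})
            frames.append(tile)
    # The metadata identifies the allocation of content to each frame,
    # enabling a post-processor to reassemble the still image later.
    metadata = {"mode": "tiles", "width": width, "height": height,
                "tile_w": tile_w, "tile_h": tile_h,
                "allocations": allocations}
    return frames, metadata
```

On this sketch, a 4×4 image split into 2×2 tiles yields a four-frame phantom sequence that a conventional motion-compensated coder could then encode, with the allocation metadata transmitted alongside the coded data.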
In rejecting the coding method of claim 1 as anticipated, the Examiner finds that Yi's encoding of the frames of input video (i.e., a series of still images) to coded video data corresponds to "converting a single still image to be coded to a video sequence," as recited in claim 1. See Final Act. 4 (citing Yi Fig. 1, ¶¶ 6–7, 12, 21, 31), 7. That is, the Examiner finds that "any given video sequence is composed [of] a plurality of single video frames which are separated and encoded," where the encoding process is considered a conversion. Ans. 3.

The Examiner further finds that Yi's limiting of a coded area to a region of a frame, with the limit communicated to the decoder through metadata, corresponds to "generating metadata identifying allocations of content from the still image to the frames of the video sequence," as recited in claim 1. See Final Act. 4 (citing Yi ¶¶ 37, 51), 7; see also Ans. 4.

Appellant argues that "[t]he mere fact that video sequences are composed of a plurality of single video frames does not mean that the claimed conversion takes place" to convert "one source image – 'a single still image' – into a 'video sequence.'" Reply Br. 3; see also Appeal Br. 7 ("[t]here is no conversion of a single image to a video sequence that includes multiple frames"). "Ordinarily, frames for motion picture video are generated simply by capturing image information at different times, for example, by a camera." Reply Br. 3.

Appellant also contends the Examiner erred because the metadata in Yi does not allow "a decoder to reassemble the video sequence into a single image, once [the video sequence] is decoded." Id. Rather, Appellant argues, "Yi merely states that a coder distinguishes a used area of a frame . . . from an unused area . . . of the frame." Appeal Br. 7 (citing Yi Figs. 2(a)–(b)).
Appellant further argues that Yi's encoding of the area of a frame that is used for coding merely describes coordinates of locations in a single frame of a video sequence rather than describing "which elements of video content from a first image are allocated to frames of a video sequence." Id. at 8.

Appellant's arguments are persuasive. Yi's coding of a series of source video frames encodes a series of still images into a video sequence rather than a single still image into a video sequence. See Yi ¶ 31. Moreover, the designation of a used portion of a frame merely identifies what portion of the frame will be visible. Id. ¶ 38. It does not identify an allocation of content from a still image. Yi's designation does not even designate what portion of the frame may be referenced by other inter-coded frames, as Yi discloses that the unused (i.e., not visible) portion either may be referenced by other frames—in which case a lower complexity coding mode may be sufficient for the unused portion (id.)—or may not be referenced—in which case the unused portion may be filled with "gray, white, or black pixels that are cheap to code" (id. ¶ 39).

For these reasons, the Examiner's findings do not show that Yi discloses either "converting a single still image to be coded to a video sequence" or "generating metadata identifying allocations of content from the still image to the frames of the video sequence," as recited in claim 1. Accordingly, we do not sustain the Examiner's 35 U.S.C. § 102(a)(1) rejection of claim 1, or claims 2–16, which contain similar recitations.

Claims 17–24

As discussed above, Appellant's disclosed coding method converts a single image to a phantom video sequence and generates metadata that enables the single still image to be recovered from the phantom video sequence. See Spec. ¶ 34.
Claim 17 recites a process of decoding a video sequence and "responsive to metadata, received in a channel with the coded video sequence, converting the decoded video sequence to a single still image." Thus, claim 17 recites the process of recovering the single still image from the phantom video sequence using metadata.

In rejecting claim 17 as anticipated, the Examiner relies on Yi's post-processing of "recovered video prior to output . . . [to] improve the quality of the video displayed" as corresponding to the claimed conversion of a decoded video sequence to a single still image, responsive to metadata. Final Act. 10 (citing Yi Fig. 1, ¶¶ 22, 26).

Appellant contends the Examiner erred because "Yi does not disclose any process in which such a conversion is performed in response to metadata received from a channel. In Yi's system, the metadata merely identifies a corner location where a boundary between the used area 221 and unused area 222 occurs." Appeal Br. 10.

Appellant's arguments accord with Yi, which discloses that the used area coordinates merely disclose what area of a frame will be displayed. See, e.g., Yi ¶ 38. Moreover, the Examiner does not show that the signal conditioning operations of Yi's post-processor—e.g., "filtering, de-interlacing, scaling or other processing operations . . . that may improve the quality of the video displayed" (id. ¶ 26)—indicate that the signal conditioning operations are performed "responsive to metadata[] received in a channel with the coded video sequence," as recited in claim 17.

The Examiner also finds that "decoded video sequences may be split into frames and reference frames may be stored individually from the video sequence." Ans. 5. In response, Appellant argues that "[s]toring a frame of a video sequence does not teach converting a video sequence to a single image." Reply Br. 5.
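The distinction Appellant draws—storing a decoded frame versus converting the whole sequence back into the single source image—can be illustrated with a hedged sketch. The metadata layout below (per-frame tile positions) is a hypothetical illustration introduced here, not Yi's format and not necessarily Appellant's disclosed format; merely indexing `frames[0]` would be the frame-storage reading, whereas the metadata-driven loop is the claimed conversion reading.

```python
# Hypothetical sketch of the decode-side conversion recited in claim 17:
# the decoded frames (tiles) are reassembled into the single still image
# by placing each frame at the position recorded in metadata received with
# the coded sequence. Metadata field names are illustrative assumptions.

def phantom_sequence_to_image(frames, metadata):
    """Recover the single still image from decoded frames using metadata."""
    w, h = metadata["width"], metadata["height"]
    image = [[None] * w for _ in range(h)]
    for alloc in metadata["allocations"]:
        tile = frames[alloc["frame"]]
        for dy, row in enumerate(tile):
            for dx, pixel in enumerate(row):
                # Content allocated to this frame returns to its source position.
                image[alloc["y"] + dy][alloc["x"] + dx] = pixel
    return image

# By contrast, merely splitting the sequence and keeping one frame --
# e.g., stored_frame = frames[0] -- yields a single tile, not the image.
```

The point of the sketch is that the conversion step consumes the allocation metadata; without it, no arrangement of individual decoded frames recovers the source image.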
We agree with Appellant that merely splitting a video sequence into frames, and even storing an individual frame from the video sequence, does not disclose "converting the decoded video sequence to a single still image" in the manner recited. In particular, obtaining individual frames merely represents "decoding a coded video sequence," which is a separate step from the claimed conversion of "the video sequence to a single image." Converting a video sequence to a single image recovers a single image encoded in the sequence, not just a frame (i.e., not just an image that is obtained in the process of decoding the video sequence). See, e.g., Spec. ¶ 34.

Moreover, the Examiner does not identify any metadata in Yi that triggers either the splitting of a video sequence into frames or the storing of an individual frame. Thus, the Examiner's findings do not show any such splitting or storage is "responsive to metadata[] received in a channel with the coded video sequence," as recited in claim 17.

For these reasons, we agree with Appellant that the Examiner's findings fail to show that Yi discloses "responsive to metadata, received in a channel with the coded video sequence, converting the decoded video sequence to a single still image," as recited in claim 17. Accordingly, we do not sustain the Examiner's 35 U.S.C. § 102(a)(1) rejection of claim 17, or claims 18–24, which contain similar recitations.

CONCLUSION

Claims Rejected    35 U.S.C. §    Reference    Affirmed    Reversed
1–24               102(a)(1)      Yi                        1–24

REVERSED