UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/177,461
FILING DATE: 02/11/2014
FIRST NAMED INVENTOR: Alexander IZVORSKI
ATTORNEY DOCKET NO.: 0715150.734-US2
CONFIRMATION NO.: 4834
CORRESPONDENCE: 104433 7590 07/15/2020, Byrne Poh LLP/Google LLC, 11 Broadway Ste 760, New York, NY 10004
EXAMINER: NOH, JAE NAM
ART UNIT: 2481
NOTIFICATION DATE: 07/15/2020
DELIVERY MODE: ELECTRONIC

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________
Ex parte ALEXANDER IZVORSKI, MAKARAND DHARMAPURIKAR, and JUSTIN BISCHOFF1
____________________
Appeal 2019-002710
Application 14/177,461
Technology Center 2400
____________________
Before ROBERT E. NAPPI, JOHNNY A. KUMAR, and JAMES W. DEJMEK, Administrative Patent Judges.

NAPPI, Administrative Patent Judge.

DECISION ON APPEAL

Appellant appeals under 35 U.S.C. § 134(a) from the Examiner’s rejection of claims 1 through 5 and 7 through 31. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM IN PART.

1 We use the word Appellant to refer to “applicant” as defined in 37 C.F.R. § 1.42(a) (2017). According to Appellant, Google Inc. is the real party in interest. Appeal Br. 1.

INVENTION

Appellant’s invention relates to an approach for encoding a current video frame by labeling points/regions of the current video frame using graphics information for that frame, matching those points/regions with points/regions of a previous video frame using the labels, and deriving motion vectors for the points/regions of the current video frame. See Abstract.
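To make the summarized approach concrete before turning to the claim language, the following is a minimal, purely illustrative sketch of label-based matching and motion-vector derivation. The names (Primitive, derive_motion_vectors) and the representation of each point/region as a single 2D position are assumptions made for illustration; they are not taken from the application, the claims, or the cited references.

```python
# Purely illustrative sketch; names and data shapes are assumptions, not taken
# from the application or the cited references.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Primitive:
    label: str                     # identifier derived from graphics information,
                                   # e.g., the primitive's ID in the 3D model space
    position: Tuple[float, float]  # 2D point/region representing the rendered primitive

def derive_motion_vectors(current: List[Primitive],
                          previous: List[Primitive]) -> Dict[str, Tuple[float, float]]:
    """Match primitives across frames by label and derive a motion vector for each
    matched primitive as the displacement of its point/region."""
    prev_by_label = {p.label: p for p in previous}
    motion_vectors: Dict[str, Tuple[float, float]] = {}
    for prim in current:
        match = prev_by_label.get(prim.label)   # same label => same primitive
        if match is None:
            continue                            # no counterpart in the previous frame
        dx = prim.position[0] - match.position[0]
        dy = prim.position[1] - match.position[1]
        motion_vectors[prim.label] = (dx, dy)
    return motion_vectors

# Example: a primitive labeled "tri-42" moved 3 pixels right and 1 pixel down.
current_frame = [Primitive("tri-42", (103.0, 51.0))]
previous_frame = [Primitive("tri-42", (100.0, 50.0))]
print(derive_motion_vectors(current_frame, previous_frame))  # {'tri-42': (3.0, 1.0)}
```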
Claim 1 is reproduced below.

1. A method for encoding a current video frame comprising:
    rendering the current video frame to generate a plurality of primitives each including a point/region represented as a two-dimensional geometric shape;
    generating a label for each of the primitives of the current video frame using graphics information for the current video frame, the label being used to identify the primitive of a rendered video frame, the graphic information including information corresponding to the two-dimensional geometric shape in a three-dimensional model space of the primitive;
    determining a previous video frame associated with the current video frame, the previous video frame being labeled using the graphic information;
    matching one of the plurality of primitives of the current video frame with at least one of the plurality of primitives of the previous video frame using the label for the primitive of the current video frame, the matched primitive of the current video frame having the same label as the at least one of the plurality of primitives of the previous video frame; and
    deriving motion vectors for the point/region corresponding to the matched primitive of the current video frame using a point/region corresponding to the at least one of the plurality of primitives of the previous frame.

EXAMINER’S REJECTIONS2

2 Throughout this Decision we refer to the Appeal Brief filed September 4, 2018 (“Appeal Br.”); Final Office Action mailed August 25, 2017 (“Final Act.”); and the Examiner’s Answer mailed December 13, 2018 (“Ans.”).

The Examiner rejected claims 1 and 7 through 31 under 35 U.S.C. § 103 as unpatentable over Boon (US 2009/0116760 A1, published May 7, 2009) and Wong (US 2004/0101056 A1, published May 27, 2004). Final Act. 4–23.

The Examiner rejected claims 2 through 5 under 35 U.S.C. § 103 as unpatentable over Boon, Wong, and Parm (US 2014/0196102 A1, published July 10, 2014). Final Act. 23–27.

35 U.S.C. § 103 Rejections

With respect to claims 1, 15, and 16, Appellant argues the paragraphs of Wong referenced by the Examiner do not support the Examiner’s finding that the texture map or texture information are the same as the claimed primitives or labels. Appeal Br. 9–11. Appellant argues:

    There is no indication in Wong that motion vector data is anything other than conventional motion vector data. Therefore, the motion vector information and the motion vector texture map are not used to label anything. Instead, the motion vector information and the motion vector texture map are used to enable 3D shader circuitry to provide motion compensation prediction instead of dedicated motion compensation prediction hardware.

Appeal Br. 11.

Further, Appellant argues the references fail to disclose motion vectors as claimed, stating “Wong discloses using motion vectors to generate texture signals and texture maps. Texture signals and texture maps are not motion vectors.” Appeal Br. 12 (emphasis omitted).
Based upon these arguments, Appellant concludes that:

    The combination of Boon and Wong fails to disclose “matching one of the plurality of primitives of the current video frame with at least one of the plurality of primitives of the previous video frame using the label for the primitive of the current video frame, the matched primitive of the current video frame having the same label as the at least one of the plurality of primitives of the previous video frame,” as recited by claim 1, and as similarly recited in independent claims 15 and 16. The combination of Boon and Wong fails to disclose “determining a motion vector for each vertex of the current video frame by comparing the position of the plurality of vertices for each vertex in the current video frame to the position of the plurality of vertices a corresponding vertex in the previous video frame,” as recited by claim 17, and as similarly recited in independent claims 22 and 27. Instead, Wong performs prediction using textures of a macroblock in a “conventional manner.”

Appeal Br. 12.

The Examiner finds Boon teaches the disputed limitation of generating labels (equated to Boon’s texture signals) for primitives in current and previous frames, matching the labels, and deriving motion vectors. Final Act. 5–7 (citing Boon ¶¶ 13, 202). The Examiner finds that Boon does not teach rendering the current video frame as claimed and that Wong teaches rendering the current video frame to generate plural primitives as claimed. Final Act. 8–10 (citing Wong ¶¶ 8, 25, 80, 81). Further, in response to Appellant’s arguments, the Examiner states that, absent a specific definition of a “label,” the texture information of Wong is deemed to be a label. Ans. 25. Moreover, in response to Appellant’s arguments concerning the combination of the references not teaching determining motion vectors for vertices, the Examiner states:

    Wong discloses in Pa0081, as an example, that the macroblock (treated as a texture and motion compensated as a texture) having four vertices and the destination vertices are processed so as to perform motion compensation prediction which is clearly understood by one of ordinary skilled in the art to be performing matching and/or comparison of the textures (macroblocks).

Ans. 27.

Appellant’s arguments have persuaded us of error in the Examiner’s rejection of claims 1, 15, and 16. Each of these claims recites limitations of generating labels for each primitive in a current and previous video frame, matching the primitives, and deriving motion vectors. We have reviewed the teachings of Boon and Wong cited by the Examiner, and regardless of whether the texture signal is a label, we do not find that the cited teachings use labels (texture signals) for primitives to determine a match and derive a motion vector as claimed. Rather, the texture information is used as part of the prediction information (Boon ¶ 13). Although the prediction information may include motion vectors (Boon ¶ 23), we do not see that the cited paragraphs of Boon teach that the matching of texture signals between images is used to generate the motion vectors. Thus, we do not sustain the Examiner’s rejection of independent claims 1, 15, and 16, and the claims which depend therefrom, claims 7 through 14.

The Examiner has not shown that the additional teachings of Parm, used in the rejection of dependent claims 2 through 5, make up for the deficiency in the rejection of independent claim 1. Accordingly, we similarly do not sustain the Examiner’s rejection of claims 2 through 5.

Independent claims 17, 22, and 27, however, are of a different scope than independent claims 1, 15, and 16. Independent claims 17, 22, and 27 do not recite labeling primitives, matching primitives of a current frame with those of a previous frame, and deriving motion vectors based upon the matched primitive. As such, Appellant’s arguments directed to the labeling of primitives and matching labeled primitives to generate motion vectors are not commensurate with the scope of independent claims 17, 22, and 27. These claims merely recite determining positions of a plurality of vertices in a current and a previous video frame and determining a motion vector based upon a comparison of those positions. As discussed above, the Examiner equates macroblocks with the claimed positions having vertices in a frame and finds that the combination of Boon and Wong teaches generating motion vectors as recited in claims 17, 22, and 27. We concur and note Boon teaches generating a motion vector by comparing macroblocks (see ¶¶ 6, 7, 144). Appellant has not addressed this claim interpretation by the Examiner, but rather merely asserts the references “fails to disclose ‘determining a motion vector for each vertex of the current video frame by comparing the position of the plurality of vertices for each vertex in the current video frame to the position of the plurality of vertices a corresponding vertex in the previous video frame.’” Appeal Br. 12. This assertion by Appellant, which does not address the Examiner’s claim interpretation, has not persuaded us of error with respect to the rejection of independent claims 17, 22, and 27. Accordingly, we sustain the Examiner’s rejection of claims 17 through 31.
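To make the distinction in claim scope concrete, the following is a minimal, purely illustrative sketch of determining per-vertex motion vectors by comparing vertex positions across frames. It assumes a hypothetical integer vertex index for matching vertices between frames; nothing in it is drawn from the claims at issue, Boon, or Wong.

```python
# Purely illustrative sketch; the per-vertex index and function name are hypothetical.
from typing import Dict, Tuple

Vec2 = Tuple[float, float]

def vertex_motion_vectors(current: Dict[int, Vec2],
                          previous: Dict[int, Vec2]) -> Dict[int, Vec2]:
    """For each vertex present in both frames, the motion vector is simply the
    difference between its current position and its previous position."""
    return {
        vertex_id: (pos[0] - previous[vertex_id][0], pos[1] - previous[vertex_id][1])
        for vertex_id, pos in current.items()
        if vertex_id in previous
    }

# Example: vertex 7 moves from (10, 20) to (12, 19), giving motion vector (2, -1).
print(vertex_motion_vectors({7: (12.0, 19.0)}, {7: (10.0, 20.0)}))  # {7: (2.0, -1.0)}
```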
CONCLUSION

In summary:

Claims Rejected   35 U.S.C. §   Reference(s)/Basis   Affirmed   Reversed
1, 7–31           103           Boon, Wong           17–31      1, 7–16
2–5               103           Boon, Wong, Parm                2–5
Overall Outcome                                      17–31      1–5, 7–16

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv) (2017).

AFFIRMED IN PART