DeepMind Technologies Limited, No. 2020-006533 (P.T.A.B. Mar. 15, 2021)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 15/016,160
FILING DATE: 02/04/2016
FIRST NAMED INVENTOR: Karol Gregor
ATTORNEY DOCKET NO.: 45288-6750001
CONFIRMATION NO.: 6482

151793 7590 03/15/2021
FISH & RICHARDSON P.C.
P.O. BOX 1022
MINNEAPOLIS, MN 55440-1022

EXAMINER: COUGHLAN, PETER D
ART UNIT: 2121
NOTIFICATION DATE: 03/15/2021
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): PATDOCTC@fr.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________
Ex parte KAROL GREGOR and IVO DANIHELKA
____________________
Appeal 2020-006533
Application 15/016,160
Technology Center 2100
____________________

Before ERIC S. FRAHM, JUSTIN BUSCH, and JAMES W. DEJMEK, Administrative Patent Judges.

FRAHM, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Appellant[1] appeals under 35 U.S.C. § 134(a) from a rejection of claims 1–6, 8–12, 15–18, and 20–23, which constitute all pending claims on appeal. Claims 7, 13, 14, and 19 have been canceled. We have jurisdiction under 35 U.S.C. § 6(b). An Oral Hearing was conducted on March 3, 2021.[2]

We reverse.

[1] We use the word "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42 (2019). "The word 'applicant' when used in this title refers to the inventor or all of the joint inventors, or to the person applying for a patent as provided in §§ 1.43, 1.45, or 1.46." 37 C.F.R. § 1.42(a). Appellant identifies DeepMind Technologies Limited as the real party in interest (Appeal Br. 1).

[2] Appellant was represented at oral hearing by Jiao Wang, USPTO Registration No. 78,294, and Michael Portnov. A transcript of the Oral Hearing will be made of record in due course.

DISCLOSED AND CLAIMED INVENTION

Appellant's disclosed and claimed invention pertains to neural networks, and more particularly recurrent neural networks (see Title; Abstract; Spec. ¶¶ 3, 4; claim 21). "A recurrent neural network is a neural network that receives an input sequence and generates an output sequence from the input sequence" (Spec. ¶ 4). "An example of a recurrent neural network is a Long Short-Term Memory (LSTM) neural network that includes one or more LSTM memory blocks," each block including "one or more cells that each include an input gate, a forget gate, and an output gate that allow the cell to store previous states for the cell" (Spec. ¶ 4). One example of an LSTM neural network is provided by Alex Graves, Generating Sequences With Recurrent Neural Networks, pp. 1–43 (June 5, 2014) (see Spec. ¶ 21).

According to Appellant, the disclosed invention, entitled "Recurrent Neural Networks for Data Item Generation" (Title), "relates to processing inputs through the layers of recurrent neural networks to generate outputs" (Spec. ¶ 2). Appellant recognizes that using a recurrent neural network to iteratively construct complex images (see Spec. ¶ 9 (using iterative corrections of images "improv[es] the accuracy and quality of constructed images")) can "generate high quality images, e.g., highly realistic natural images that cannot be distinguished from real data with the naked eye" (Spec. ¶ 8) by using "backpropagation techniques" (Spec. ¶ 10).
This is done by providing a recurrent neural network as set forth in independent claim 21, which is illustrative of the claimed subject matter:

21. A neural network system implemented by one or more computers, the neural network system comprising:
    a decoder neural network, wherein the decoder neural network is a recurrent neural network that is configured to, for each of the plurality of time steps:
        receive a decoder input for the time step, and
        process the decoder hidden state vector for the preceding time step and the decoder input to generate a decoder hidden state vector for the time step; and
    [A] a subsystem, wherein the subsystem is configured to generate a final output image by repeatedly updating a neural network output at each of a plurality of time steps based on an input image to generate a final neural network output and generating the final output image from the final neural network output, the updating comprising, for each of the time steps:
        generating the decoder input for the decoder neural network;
        providing the decoder input as input to the decoder neural network for the time step;
        [B] generating a neural network output update for the time step from the decoder hidden state vector for the time step; and
        [C] combining the neural network output update for the time step with a current neural network output to generate an updated neural network output.

Appeal Br. 22–23 (formatting, bracketed lettering, and emphases added).

Remaining independent claims 1 (recurrent neural network system), 15 (computer implemented method for processing an input image with a recurrent neural network having an encoder/decoder neural network as in claim 21), and 20 (computer storage medium with instructions to implement a neural network system as in claim 1) recite commensurate subject matter.

REJECTIONS

The Examiner made the following rejections:

(1) Claims 1, 3–5, 15, 16, 20, 21, and 23 are rejected under 35 U.S.C.
§ 103 as being unpatentable over C. C. Tan & C. Eswaran, Reconstruction of Handwritten Digit Images Using Autoencoder Neural Networks, 465–470 (2008) (hereinafter, "Tan"); Lazar et al. (US 2013/0311412 A1; published Nov. 21, 2013) (hereinafter, "Lazar"); and Kishan Mehrotra et al., ELEMENTS OF ARTIFICIAL NEURAL NETWORKS, 1–344 (1997) (hereinafter, "Mehrotra"). Final Act. 3–26.

(2) Claim 2 is rejected under 35 U.S.C. § 103 as being unpatentable over Tan, Lazar, Mehrotra, and Wang et al. (US 2014/0358265 A1; published Dec. 4, 2014) (hereinafter, "Wang"). Final Act. 26–27.

(3) Claims 6, 8, 9, 12, 18, and 22 are rejected under 35 U.S.C. § 103 as being unpatentable over Tan, Lazar, Mehrotra, Wang, and Yan Liu et al., Discriminative Deep Belief Networks for Visual Data Classification, 2287–96 (Dec. 25, 2010) (hereinafter, "Liu"). Final Act. 27–32.

(4) Claims 11 and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Tan, Lazar, Mehrotra, and Chen et al. (US 6,591,235 B1; issued July 8, 2003) (hereinafter, "Chen"). Final Act. 32–34.

(5) Claim 10 is rejected under 35 U.S.C. § 103 as being unpatentable over Tan, Lazar, Mehrotra, Wang, Liu, and Chen. Final Act. 34–35.

Appellant's Contentions

With regard to the obviousness rejection of claims 1, 3–5, 15, 16, 20, and 23, Appellant primarily argues the merits of independent claim 21 (see Appeal Br. 5–15; Reply Br. 1–4), and makes similar arguments as to the patentability of independent claims 1, 15, and 20 (which are claims having similar scope), as well as the dependent claims (see Appeal Br. 14–15; Reply Br. 3–4).

As to independent claim 21, Appellant contends (Appeal Br. 5–14; Reply Br. 1–3) that the Examiner erred in rejecting claim 21 under 35 U.S.C. § 103, because Tan, and thus the combination, fails to teach or suggest a recurrent neural network including limitations A, B, and C, as recited in claim 21.
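The disputed limitations [A], [B], and [C] describe a generation loop in which the output itself, not merely the network's weights, accumulates across time steps. A minimal sketch of such a loop follows; the dimensions, decoder rule, and update rule are all hypothetical choices made for illustration and are not taken from the application or the claims.

```python
import math

TIME_STEPS = 4

def decoder_step(hidden, dec_input):
    """Toy recurrent decoder: new hidden state from the prior state and the input."""
    return [math.tanh(0.5 * h + 0.5 * x) for h, x in zip(hidden, dec_input)]

input_image = [0.9, -0.3, 0.6]   # toy "input image" (illustrative values only)
hidden = [0.0] * 3
output = [0.0] * 3               # the current neural network output

# [A] repeatedly update the neural network output at each of a plurality of time steps
for t in range(TIME_STEPS):
    dec_input = [v - o for v, o in zip(input_image, output)]  # generate the decoder input
    hidden = decoder_step(hidden, dec_input)  # decoder processes prior state + input
    update = [0.5 * h for h in hidden]        # [B] output update from the hidden state
    output = [o + u for o, u in zip(output, update)]  # [C] combine update with current output

final_output_image = output      # final output after the last time step
```

By contrast, an iterative training procedure of the kind the Examiner cited in Tan repeatedly adjusts weights and recomputes the output afresh on each pass; nothing corresponding to the running accumulation at line [C] appears in such a loop, which is the crux of Appellant's argument.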
Based on Appellant's obviousness arguments, and because claims 1, 3–5, 15, 16, 20, 21, and 23 contain commensurate limitations, we select claim 21 as determinative as to the outcome for claims 1, 3–5, 15, 16, 20, 21, and 23 rejected under 35 U.S.C. § 103(a) for obviousness over the base combination of Tan, Lazar, and Mehrotra. For similar reasons, we decide the outcome of claims 2, 6, 8–12, 17, 18, and 22 (rejected over the same base combination taken with various other references), which ultimately depend from independent claims 1, 15, and/or 21, on the same basis as claim 21.

Principal Issue on Appeal

Based on Appellant's arguments in the Appeal Brief (Appeal Br. 5–15) and the Reply Brief (Reply Br. 1–4), the following principal issue is presented on appeal: Did the Examiner err in rejecting claims 1–6, 8–12, 15–18, and 20–23 as being unpatentable because Tan, and thus the base combination of Tan, Lazar, and Mehrotra, fails to teach or suggest a recurrent neural network system including limitations A and C, as recited in claim 21, and as similarly recited in remaining independent claims 1, 15, and 20?

ANALYSIS

We have reviewed the Examiner's obviousness rejections of claims 1–6, 8–12, 15–18, and 20–23 (Final Act. 3–26) and the Examiner's response to Appellant's arguments in the Appeal Brief (Ans. 4–14), in light of Appellant's arguments in the Appeal Brief (Appeal Br. 5–15), the Reply Brief (Reply Br. 1–4), and at oral hearing, and we conclude that the Examiner has erred. Appellant's arguments as to Tan (see Appeal Br. 5–14; Reply Br. 1–3) are persuasive.

Specifically, Appellant's arguments (see Appeal Br. 7–11; Reply Br.
1–2) that Tan fails to teach or suggest limitations A and C, including "updating a neural network output at each of a plurality of time steps" (see, e.g., claim 21, limitation A), and "combining the neural network output update for the time step with a current neural network output to generate an updated neural network output" (see, e.g., claim 21, limitation C), are persuasive. Although Tan's Figure 2 shows, and Section 3.2 describes, iteratively training weights W using an "iterative training procedure" (see Tan Section 3.2, last paragraph), Tan is silent as to updating outputs iteratively at plural time steps as in limitation A, and combining the output update for each time step with the current output to generate an updated output as set forth in claim 21, and similarly set forth in claims 1, 15, and 20.

Appellant has shown the Examiner erred in rejecting claim 21 as being unpatentable under 35 U.S.C. § 103. In view of the foregoing, on this record, we cannot sustain the rejection of representative claim 21, as well as claims 1, 3–5, 15, 16, 20, and 23 argued for similar reasons, as being obvious over the base combination of Tan, Lazar, and Mehrotra. For similar reasons, and because Appellant relies on the arguments presented as to claims 1, 15, 20, and 21 in arguing dependent claims 2, 6, 8–12, 17, 18, and 22, we also do not sustain the remaining obviousness rejections of claims 2, 6, 8–12, 17, 18, and 22 over the base combination taken with (i) Wang (as to claim 2); (ii) Wang and Liu (as to claims 6, 8, 9, 12, 18, and 22); (iii) Chen (as to claims 11 and 17); and (iv) Wang, Liu, and Chen (as to claim 10).

CONCLUSION[3]

For all of the reasons above, we hold as follows:

Claim(s) Rejected          | 35 U.S.C. § | Reference(s)/Basis                    | Affirmed | Reversed
1, 3–5, 15, 16, 20, 21, 23 | 103         | Tan, Lazar, Mehrotra                  |          | 1, 3–5, 15, 16, 20, 21, 23
2                          | 103         | Tan, Lazar, Mehrotra, Wang            |          | 2
6, 8, 9, 12, 18, 22        | 103         | Tan, Lazar, Mehrotra, Wang, Liu       |          | 6, 8, 9, 12, 18, 22
11, 17                     | 103         | Tan, Lazar, Mehrotra, Chen            |          | 11, 17
10                         | 103         | Tan, Lazar, Mehrotra, Wang, Liu, Chen |          | 10
Overall Outcome            |             |                                       |          | 1–6, 8–12, 15–18, 20–23

[3] We note that Appellant's Admitted Prior Art disclosed in paragraph 21 of the Specification, Alex Graves, Generating Sequences With Recurrent Neural Networks, arxiv.org/abs/1308.0850v5, pp. 1–43 (June 5, 2014), teaches a recurrent neural network that iteratively operates on time steps to generate cursive handwriting and mimic a particular writer's style (see Abstract; Fig. 1; Section 1, p. 3). Although not before us on Appeal, we leave it to the Examiner to evaluate whether the independent claims are obvious under 35 U.S.C. § 103 over the combination of any and/or all of the applied references in combination with Graves. Although the Board is authorized to reject claims under 37 C.F.R. § 41.50(b), no inference should be drawn when the Board elects not to do so. See Manual of Patent Examining Procedure (MPEP) § 1213.02.

REVERSED