Trials@uspto.gov Paper 24
571-272-7822 Date: July 22, 2021

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________

GOOGLE LLC,
Petitioner,

v.

UNILOC 2017 LLC,
Patent Owner.
____________________

IPR2020-00479
Patent 6,349,154 B1
____________________

Before JENNIFER S. BISK, DAVID C. MCKONE, and SHARON FENICK, Administrative Patent Judges.

MCKONE, Administrative Patent Judge.

JUDGMENT
Final Written Decision
Determining All Challenged Claims Unpatentable
35 U.S.C. § 318(a)

I. INTRODUCTION

A. Background and Summary

Google LLC (“Petitioner”) filed a Petition (Paper 1, “Pet.”) requesting inter partes review of claims 1–4 of U.S. Patent No. 6,349,154 B1 (Ex. 1001, “the ’154 patent”). Pet. 2. Uniloc 2017 LLC (“Patent Owner”) filed a Preliminary Response (Paper 6, “Prelim. Resp.”). Pursuant to 35 U.S.C. § 314, we instituted this proceeding. Paper 10 (“Dec.”). Patent Owner filed a Patent Owner’s Response (Paper 12, “PO Resp.”), Petitioner filed a Reply to the Patent Owner’s Response (Paper 14, “Reply”), and Patent Owner filed a Sur-Reply to the Reply (Paper 16, “Sur-reply”). An oral argument was held on May 13, 2021 (Paper 23, “Tr.”).

We have jurisdiction under 35 U.S.C. § 6. This Decision is a final written decision under 35 U.S.C. § 318(a) as to the patentability of claims 1–4. Based on the record before us, Petitioner has proved, by a preponderance of the evidence, that claims 1–4 are unpatentable.

B. Related Matters

The parties indicate that the ’154 patent is at issue in Uniloc 2017 LLC v. Google LLC, No. 2:18-cv-00496 (E.D. Tex.). Pet. 1; Paper 3, 2. The United States District Court for the Eastern District of Texas (“Texas court”) transferred this case to the United States District Court for the Northern District of California (“California court”).
Ex. 1017. Petitioner states that the California court found that another party held sufficient rights in the ’154 patent such that Patent Owner lacked standing to sue and, accordingly, dismissed the litigation for lack of subject matter jurisdiction. Paper 13, 1 (citing Uniloc 2017 LLC v. Google LLC, No. 4:20-cv-05345-YGR, Dkt. 210 (N.D. Cal. Dec. 22, 2020)).

C. The ’154 Patent

The ’154 patent describes a technique for receiving a sequence of lower-resolution pictures, estimating motion in those pictures, and creating a high-resolution still digital picture from the sequence of lower-resolution pictures. Ex. 1001, 1:6–12. Figures 2 and 3, reproduced below, are illustrative. Figure 2 is a block diagram of a system for creating high-resolution pictures. Id. at 2:15–17. Figure 3 depicts sequences of images as they are processed by prediction encoder 2 of Figure 2. Id. at 2:18–19, 3:24–53.

With reference to Figures 2 and 3, image sensor 1 receives images Ai (A1, A2, etc.) and generates digitized low-resolution pictures Bi (B1, B2, etc.). Id. at 3:4–8. Pictures A1, A2, and A3 show three successive phases of a moving object. Id. at 2:23–25. B1 is an autonomously encoded picture and D1, showing the pixel values of B1, is applied to motion-compensated prediction encoder 2’s output and stored in frame memory 23. Id. at 3:22–27. Motion estimator 24 calculates the amount of motion between successive pictures B1, B2, and B3 to predictively encode pictures B2 and B3. Id. at 3:28–32. Using the calculated motion vector, motion compensator 25 generates prediction picture Ci, which is subtracted from picture Bi to form difference output picture Di. Id. at 3:34–37. Adder 22 adds prediction image Ci and encoded difference Di and stores the sum in frame memory 23. Id. at 3:37–39.
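For context, the predictive-encoding loop described in the patent (prediction, difference, reconstruction into frame memory) can be sketched in a few lines of code. This is purely an illustrative sketch, not part of the record or the patent’s disclosure: the function names are invented for illustration, and it uses whole-pixel shifts via np.roll for simplicity, whereas the ’154 patent contemplates sub-pixel motion vectors such as (1,½).

```python
import numpy as np

def shift(picture, dy, dx):
    # Motion compensation by a whole-pixel translation. np.roll wraps at
    # the borders; a real codec would handle picture edges explicitly.
    return np.roll(picture, (dy, dx), axis=(0, 1))

def encode(pictures, vectors):
    """Motion-compensated predictive encoding of a picture sequence.

    pictures: list of 2-D arrays B1, B2, ...
    vectors:  list of (dy, dx) motion vectors, one per predicted picture.
    Returns the difference pictures D1, D2, ... (D1 is B1 itself,
    mirroring the autonomously encoded first picture).
    """
    differences = [pictures[0]]      # B1 is autonomously encoded
    reconstructed = pictures[0]      # plays the role of frame memory 23
    for picture, v in zip(pictures[1:], vectors):
        prediction = shift(reconstructed, *v)         # motion compensator 25
        differences.append(picture - prediction)      # subtracter -> Di
        reconstructed = prediction + differences[-1]  # adder 22
    return differences

def decode(differences, vectors):
    """Inverse loop: rebuild the low-resolution sequence Bi from Di."""
    pictures = [differences[0]]
    for diff, v in zip(differences[1:], vectors):
        pictures.append(shift(pictures[-1], *v) + diff)
    return pictures
```

When the motion estimate is exact, the difference pictures are all zeros apart from the first, which is the intuition behind transmitting only Di and the motion vectors.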
Here, picture C2 is the motion-compensated prediction picture for encoding picture B2, motion vector m12 has the value (1,½), picture C3 is the motion-compensated prediction picture for encoding picture B3, and motion vector m23 has the value (½,0). Id. at 3:40–51.

Encoded pictures Di and motion vectors m are stored on storage medium 3 and transmitted through a transmission channel to motion-compensated prediction decoder 4, which decodes the original sequence of low-resolution pictures Bi and supplies them to processing circuit 5. Id. at 3:54–59, 4:1–4. The operations of processing circuit 5 are shown in Figure 4, reproduced below. Figure 4 depicts a sequence of images as they are processed by processing circuit 5 of Figure 2. Id. at 2:18–19, 4:1–38.

Up-sampler 51 up-samples pictures Bi to produce pictures Ei, in the high-resolution domain. Id. at 4:6–8. Processing circuit 5 outputs high-resolution picture G1 (for the first picture, the same as input picture E1) and stores it in frame memory 54. Id. at 4:9–11. Motion-compensated picture F2 is obtained by shifting stored picture G1 by motion vector m'12, which has a value of (2,1), twice motion vector m12. Id. at 4:15–18. Adder 53 adds picture F2 and picture E2 to produce picture G2, which is stored in memory 54. Id. at 4:13–15, 4:18–27. Picture G3 is obtained similarly from pictures E3 and F3, and motion vector m'23, which has a value of (1,0), twice motion vector m23. Id. at 4:17–18, 4:28–37. These steps are repeated for further pictures in the sequence. Id. at 4:39–42.

Claim 1 is the only independent claim challenged in this proceeding. The remaining challenged claims depend from claim 1. Claim 1, reproduced below, is illustrative of the invention:

1.
A method of creating a high-resolution still picture, comprising the steps of:
    receiving a sequence of lower-resolution pictures;
    estimating motion in said sequence of lower-resolution pictures with sub-pixel accuracy; and
    creating the high-resolution still picture from said sequence of lower-resolution pictures and said estimated motion;
    wherein the method comprises the steps of:
        subjecting the sequence of pictures to motion-compensated predictive encoding, thereby generating motion vectors representing motion between successive pictures of said sequence;
        decoding said encoded pictures; and
        creating the high-resolution picture from said decoded pictures and the motion vectors generated in said encoding step.

D. Evidence

Petitioner relies on the references listed below:

    Reference            Date           Exhibit No.
    Hwang US 5,666,160   Sept. 9, 1997  1005
    Yoon US 5,532,747    July 2, 1996   1007

Petitioner also relies on the Declaration of Jeffrey J. Rodriguez, Ph.D. (Ex. 1002), and the Rebuttal Declaration of Dr. Rodriguez (Ex. 1022). Patent Owner does not rely on expert testimony.

E. The Instituted Ground of Unpatentability

    Claim(s) Challenged   35 U.S.C. §   Reference(s)/Basis
    1–4                   103¹          Hwang, Yoon

¹ The Leahy-Smith America Invents Act, Pub. L. No. 112-29, 125 Stat. 284 (2011) (“AIA”), amended 35 U.S.C. § 103. Changes to § 103 apply to applications filed on or after March 16, 2013. Because the ’154 patent has an effective filing date before March 16, 2013, we refer to the pre-AIA version of § 103.

II. ANALYSIS

A. Claim Construction

For petitions filed after November 13, 2018, we construe claims “using the same claim construction standard that would be used to construe the claim in a civil action under 35 U.S.C. 282(b), including construing the claim in accordance with the ordinary and customary meaning of such claim as understood by one of ordinary skill in the art and the prosecution history pertaining to the patent.” 37 C.F.R. § 42.100(b) (2019); see also Phillips v. AWH Corp., 415 F.3d 1303 (Fed. Cir. 2005) (en banc).

The Texas court previously construed several claim terms. Ex. 1013, 9–39. Neither party challenged those constructions at the institution stage. Pet. 5–6; Prelim. Resp. 22–23. Accordingly, we adopted those constructions for purposes of the Institution Decision. Dec. 20–21. Those constructions are summarized below:

“resolution” (claims 1–4): number of pixels

“estimating motion in said sequence of lower resolution pictures with sub-pixel accuracy” (claims 1–4): estimating motion, in said sequence of lower resolution pictures, with accuracy capable of representing motion that is less than a single pixel

“motion-compensated predictive encoding” (claims 1–4): predictive encoding based on motion between a current picture and a previously encoded picture

“generating motion vectors representing motion between successive pictures of said sequence” (claims 1–4): generating motion vectors representing motion of pixels from positions in one picture to positions in the next picture in the sequence

“motion vector(s)” (claims 1–4): vector(s) representing motion of a block of pixels between two pictures

“sequence of I and P-pictures” (claim 3): sequence of I and P-pictures as defined by MPEG standards at the time of the invention

“recursively adding, in the high-resolution domain, a current decoded picture to a previously created picture, said previously created picture being subjected to motion compensation in accordance with the motion vector which is associated with the current decoded picture” (claim 4): recursively adding, in the high-resolution domain, the pixel values of a current decoded picture to the pixel values of a previously created picture, said previously created picture being subjected to motion-compensation in accordance with the motion vector which is associated with the current decoded picture

In the Response, Patent Owner argues that, “for all claim terms,” we should “adopt the ordinary and customary meaning of the claim terms as understood by a [person of ordinary skill in the art].” PO Resp. 28. As to the Texas court’s constructions, Patent Owner does not adopt or advocate for them, but argues that applying them “will result in the prior art not teaching the elements of the ’154 Patent.” Id. at 28–29. In any case, Patent Owner does not specifically challenge any of the Texas court’s constructions or argue that they are incorrect. Nor does Patent Owner propose any express construction for any claim term.

Petitioner argues that it showed, in the Petition, unpatentability under the Texas court’s constructions but, if we were to adopt the plain and ordinary meaning of the claims, there would be no dispute that the challenged claims are unpatentable. Reply 2–3. However, Petitioner does not specifically challenge any of the Texas court’s constructions.

Based on the complete record, we adopt the Texas court’s constructions, as reflected in the table above. We do not find it necessary to provide express claim constructions for any other terms. See Nidec Motor Corp. v. Zhongshan Broad Ocean Motor Co., 868 F.3d 1013, 1017 (Fed. Cir. 2017) (noting that “we need only construe terms ‘that are in controversy, and only to the extent necessary to resolve the controversy’”) (quoting Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc., 200 F.3d 795, 803 (Fed. Cir. 1999)).

B. Obviousness of Claims 1–4 over Hwang and Yoon

Petitioner contends that claims 1–4 would have been obvious over Hwang and Yoon. Pet. 17–76. For the reasons given below, Petitioner has shown obviousness by a preponderance of the evidence.

A claim is unpatentable under 35 U.S.C.
§ 103 if the differences between the claimed subject matter and the prior art are “such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.” We resolve the question of obviousness on the basis of underlying factual determinations, including (1) the scope and content of the prior art; (2) any differences between the claimed subject matter and the prior art; (3) the level of skill in the art; and (4) objective evidence of nonobviousness, i.e., secondary considerations.² See Graham v. John Deere Co., 383 U.S. 1, 17–18 (1966).

² The record does not include allegations or evidence of objective indicia of nonobviousness.

1. Level of Skill in the Art

Petitioner, relying on the testimony of Dr. Rodriguez, contends that a person of ordinary skill in the art “would have had a Bachelor’s degree in Electrical Engineering, Computer Science, or the equivalent thereof, and two or more years of experience with data compression systems and algorithms, including video coding.” Pet. 3–4 (citing Ex. 1002 ¶¶ 17–19). According to Petitioner, “[m]ore education can supplement practical experience and vice versa.” Id. at 4.

Patent Owner argues that Petitioner’s proposed level of skill is “improper” for several reasons, but “does not offer a competing definition for a person of ordinary skill in the art.” PO Resp. 20–27. However, Patent Owner argues that “Google’s grounds and the factual support in the Dr. Rodriguez’s Declaration are all based on a level of ordinary skill in the art that is higher and more detailed than is warranted based on the record.” Id. at 25. According to Patent Owner, “because the Petition applies a level of skill in the art that is not supported by the record, [Petitioner] has failed to meet its burden as to the challenged claims.” Id. at 27.

Patent Owner argues that Dr. Rodriguez does not indicate that he was informed of the legal standard for determining the level of skill in the art and failed to conduct a proper review. Id. at 20–21 (quoting Daiichi Sankyo Co. v. Apotex, Inc., 501 F.3d 1254, 1256 (Fed. Cir. 2007) (listing factors that may be considered in determining the level of ordinary skill in the art)). Petitioner argues that Dr. Rodriguez did consider the relevant factors. Reply 23–24 (citing Ex. 1002 ¶¶ 13–16, 18). We agree with Petitioner. Dr. Rodriguez’s testimony shows that he considered factors relevant to determining the level of skill in the art. Ex. 1002 ¶¶ 13–15, 18. Patent Owner does not identify any factors that Dr. Rodriguez misapplied or explain how any misapplication might have led to an incorrect level of skill.

Patent Owner next argues that Petitioner’s proposed level of skill, particularly as to the terms “more education” and “practical experience,” is too imprecise. PO Resp. 21–22. Patent Owner complains that it “cannot be applied with any precision or predictability.” Id. at 22. Petitioner argues that Dr. Rodriguez did offer a specific level of skill, and that the language Patent Owner finds objectionable “simply recognized the realities of someone’s experience.” Reply 24 (citing Ex. 1002 ¶ 18). We agree with Petitioner and do not find the proposed level of skill to lack precision. In any case, Patent Owner does not point to any instances where the alleged imprecision would have any impact on the outcome of this proceeding.

Patent Owner next argues that Dr. Rodriguez is not a person of ordinary skill in the art, and, in particular, that he lacks two years of experience with data compression systems and algorithms, which he would need to qualify under the standard for one of ordinary skill that he proposes. PO Resp. 22–23. According to Patent Owner, Dr. Rodriguez has engaged in consulting activities in many different fields outside of data compression, evidencing that he is “at most, . . . a generalist that has opined on a whole host of technologies that have run the gamut of the electrical arts,” rather than one with “any true expertise in data compression.” Id. at 23–25. Petitioner responds that Dr. Rodriguez is the Director of the Signal and Image Laboratory at the University of Arizona’s Department of Electrical and Computer Engineering, teaches classes in digital image processing that include material related to image and video compression, and does research directed to image and video processing, including research dealing with computing high-resolution images from low-resolution images. Reply 24–26 (citing Ex. 1002 ¶¶ 3, 5–11).

We agree with Petitioner that Dr. Rodriguez is at least a person of ordinary skill in the art, both by virtue of his research and teaching experience dealing with data compression and video coding and by virtue of advanced education.³ Moreover, Patent Owner’s arguments do not point to any deficiency in Petitioner’s proposed level of skill and, instead, appear to be merely an attack on Dr. Rodriguez’s credentials in an effort to have us disregard his testimony in its entirety. See PO Resp. 25 (“Google relies heavily on Dr. Rodriguez to demonstrate key gaps in the prior art and motivation to combine. His opinions should be given no weight for the reasons noted above.”), 27 (“For these reasons, Dr. Rodriguez’s Declaration should be given no weight.”).

In the Institution Decision, we determined, on the preliminary record, that Petitioner’s proposed level of skill is consistent with the technology described in the Specification and the cited prior art. Dec. 22–23. Patent Owner has proposed no alternative and has offered no evidence of its own to contradict Petitioner’s proposal.
On the complete record, we find, consistent with Petitioner’s proposal, that a person of ordinary skill in the art would have had a Bachelor’s degree in Electrical Engineering, Computer Science, or the equivalent thereof, and two or more years of experience with data compression systems and algorithms, including video coding. Ex. 1002 ¶¶ 17–19. We also find that Dr. Rodriguez’s level of skill is at least that of a person of ordinary skill in the art.

³ Even if he were not a person of ordinary skill in the art, we would still consider his testimony. To testify as an expert under Federal Rule of Evidence 702, a person need not be a person of ordinary skill in the art, but rather must be “qualified in the pertinent art.” Sundance, Inc. v. DeMonte Fabricating Ltd., 550 F.3d 1356, 1363–64 (Fed. Cir. 2008); accord SEB S.A. v. Montgomery Ward & Co., 594 F.3d 1360, 1372–73 (Fed. Cir. 2010) (upholding a district court’s ruling to allow an expert to provide testimony at trial because the expert “had sufficient relevant technical expertise” and the expert’s “knowledge, skill, experience, training [and] education . . . [wa]s likely to assist the trier of fact to understand the evidence”); Mytee Prods., Inc. v. Harris Research, Inc., 439 F. App’x 882, 886–87 (Fed. Cir. 2011) (nonprecedential) (upholding admission of the testimony of an expert who “had experience relevant to the field of the invention,” despite admission that he was not a person of ordinary skill in the art).

2. Scope and Content of the Prior Art

a) Overview of Hwang

Hwang describes a high resolution digital zooming system for enhancing the resolution of a camera image by detecting a camera motion component from pixel images across more than one image frame. Ex. 1005, 1:6–11, 2:39–45. Figure 6, reproduced below, illustrates an example. Figure 6 is a block diagram of a digital zooming system. Id. at 3:7–11.
Motion detector 61 receives an image signal as an input, detects horizontal and vertical components of pixel motion, and outputs pixel motion signal V(n) and subpixel motion signal v(n). Id. at 3:29–32. Area detector 62 “adjusts the image signal P(x;n) according to the horizontal and vertical pixel motions V(n) received from the motion detector 61, and detects a zoom area in accordance with a zoom magnification factor m.” Id. at 3:33–37. Zoomed image compensator 63 “spatially magnifies a sampling interval of the detected image” and adjusts the zoomed image according to the subpixel motion signal v(n). Id. at 3:38–42. Image interpolator 64 performs a convolution between the known pixel information and a rectangle function, and extends the image resolution range to subpixels adjacent to known pixels. Id. at 3:43–48. Image synthesizer 65 receives the output of image interpolator 64 and a previous image signal, synthesizes them, and, using an infinite impulse response (“IIR”) filter with a weighted time value, outputs a resultant image signal. Id. at 3:49–54.

b) Overview of Yoon

Yoon describes a method of decoding an encoded image signal supplied in the form of a series of encoded image frames. Ex. 1007, Abstr. In its obviousness allegations, Petitioner cites to disclosure in Yoon’s Description of the Prior Art. See, e.g., Pet. 35–37 (citing Prior Art Figs. 1, 2, and related description). This background describes a hybrid coding technique, which combines temporal and spatial compression techniques. Ex. 1007, 1:23–27. An encoder is described with respect to Figure 1 and a decoder is described with respect to Figure 2. Figure 1, reproduced below, is a block diagram of hybrid coder 100 (also referred to as encoder 100). Id. at 1:28–29, 3:3.
A block of pixels from a digitized input frame is fed to subtracter 101, where it is digitally combined with a predicted block of pixels from the previous frame. Id. at 1:30–34. After additional processing in transformer 102 and quantizer 103, the frame is fed to entropy coder 104. Id. at 1:34–40. “At the encoder 100, each block, and thereby the entire frame, is reconstructed by inversely quantizing and transforming the quantized coefficients and adding them to the pixels of the corresponding predicted block at a summer 107,” which stores the frame in memory 108 for use in constructing the next frame. Id. at 1:41–47.

Hybrid coder 100 also performs motion compensation, which “is a process of predicting each block of pixels in a present frame from the pixels in its previous frame based on an estimation of a translatory motion of the block between the present and the previous frames.” Id. at 1:56–62, Fig. 1 (motion compensated predictor 110). The estimated motion is based on motion vectors that consist of horizontal and vertical components indicating the offset of the predicted block’s location in the previous frame compared to the present block’s location. Id. at 1:62–67, Fig. 1 (motion estimator 109).⁴ A motion vector can be a half-pixel resolution motion vector derived from a full-pixel resolution motion vector. Id. at 2:1–10. “The half-pixel resolution motion vectors, together with the entropy coded data, are forwarded via the transmission channel to the associated decoder 200 for use in the conducting motion compensated prediction therein.” Id. at 2:25–28.

⁴ The parties dispute whether Yoon shows motion estimator 109 generating motion vectors representing motion between successive pictures of a sequence. We address that dispute below.

Decoder 200 is shown in Figure 2, reproduced below. Figure 2 is a block diagram of decoder 200 that matches hybrid coder 100 of Figure 1. Id. at 4:4–5, 1:47–50. Decoder 200 performs similar motion compensated prediction using the half-pixel resolution motion vectors received from encoder 100. Id. at 2:40–61.

3. Claims 1–4, Differences Between the Claimed Subject Matter and Hwang and Yoon, and Reasons to Modify or Combine

Petitioner cites Yoon for the aspects of claim 1 related to deriving motion vectors from subsequent lower-resolution pictures, encoding the lower-resolution pictures and motion vectors, and decoding the lower-resolution pictures and motion vectors. Petitioner cites Hwang for the aspects of claim 1 related to receiving decoded lower-resolution pictures and motion vectors and creating a high-resolution picture from the lower-resolution pictures and motion vectors. Pet. 38–40. Petitioner’s Demonstrative A from the Petition, reproduced below, illustrates how Petitioner proposes combining these aspects. Demonstrative A is a picture in which Petitioner has combined a portion of Hwang’s Figure 6 with blocks from Yoon’s Figures 1 and 2 and provided annotations. Id. at 41.

a) Hwang’s teachings

As to the limitations of claim 1, Petitioner argues that Hwang’s description of receiving input images teaches “receiving a sequence of lower-resolution pictures.” Pet. 20–22 (citing Ex. 1005, 1:6–11, 2:39–44, 3:7–11, 3:29–32, 4:59–5:4, 6:33–51). Petitioner further contends that Hwang’s description of motion detector 61 producing half-pixel resolution motion vectors v(n) teaches “estimating motion in said sequence of lower-resolution pictures with sub-pixel accuracy.” Pet. 23–25 (citing Ex. 1005, 2:39–45, 3:5–9, 3:29–32, 3:43–48, 4:57–5:4, 5:10–14, 6:32–41).
Petitioner further argues that Hwang’s description of processing input image data to produce zoomed output data, specifically the operations in blocks 62–65 of Figure 6, teaches “creating the high-resolution still picture from said sequence of lower-resolution pictures and said estimated motion,” as recited in claim 1. Pet. 25–34. Petitioner supports its allegations with the testimony of Dr. Rodriguez. Ex. 1002 ¶¶ 52–72.

Patent Owner argues that Hwang does not teach lower-resolution pictures and high-resolution pictures, at least as the Texas court has construed “resolution” (“number of pixels”). PO Resp. 30–34. Essentially, Patent Owner argues that Petitioner has not shown that the pictures output from Hwang’s digital zooming system have more pixels than the pictures input into the system. Id. at 7.

Hwang “relates to a high resolution digital zooming system and method which enhances the resolution of a camera image.” Ex. 1005, 1:9–11. In its background, Hwang discusses various “methods to form high resolution magnified images from small-sized low resolution images.” Id. at 1:19–20. According to Hwang, “[w]hen magnifying an image, the number of pixels for displaying the image increases.” Id. at 1:15–17. Nevertheless, Patent Owner argues, Hwang does not define “high resolution” or describe that term in relation to other images. PO Resp. 8. In particular, Patent Owner argues, Hwang does not define the pictures input to the preferred embodiment digital zooming system as low resolution. Id. Patent Owner argues that, rather than describing a system that magnifies an entire image (thus increasing the number of pixels), Hwang describes a system that only zooms a portion of an input image, resulting in an output image that has the same number of pixels as the input image, or less. Id. at 8–15.
According to Patent Owner, Hwang uses the term “high resolution” to “convey how the resolution of the portion of the zoomed image (but not the resolution of the entire input image) is increased using the technique disclosed in Hwang.” Id. at 8–9; accord id. at 9 (“[T]he reference to ‘high resolution’ in Hwang is in reference to the high resolution zoomed and magnified image, which is only a portion of the entire image.”), 11 (“A [person of ordinary skill in the art] would have understood that the zoomed image compensator 63 outputs just the zoomed image, not the entire input image.”).

Hwang states that motion detector 61 “receives an image signal P(x;n) as an input,” detects motion from the image signal, and outputs motion signals V(n) and v(n). Ex. 1005, 3:29–32. As Patent Owner notes, Hwang does not state a specific size (e.g., number of pixels) for image signal P(x;n). PO Resp. 10; Sur-reply 3. Area detector 62 receives this image signal P(x;n) and motion vector V(n) and “detects a zoom area in accordance with a zoom magnification factor m.” Ex. 1005, 3:33–37. According to Patent Owner, Hwang does not disclose a value or range for zoom magnification factor m. PO Resp. 10. Hwang’s zoomed image compensator 63 receives the signal output by area detector 62 and “spatially magnifies a sampling interval of the detected image.” Ex. 1005, 3:38–40. Patent Owner argues that this “detected image” is “the input images” and that the “sampling interval” is “the zoomed in area” of the input images, and not the entire input image. PO Resp. 10–11.
See also Sur-reply 3 (“As explained in [Patent Owner’s] Response, Hwang’s zoom image compensator 63 spatially magnifies just a ‘sampling interval’ (i.e., the zoomed in area) of the ‘detected images’ (i.e., the input images) and outputs the ‘zoomed image,’ not the entire ‘input image.’”).⁵

⁵ At the oral argument, Patent Owner argued that Hwang’s description of a “zoom area” also supports its argument that Hwang does not teach magnifying the entire input image. Tr. 31:17–23. Patent Owner does not present this argument in its papers or provide any meaningful explanation of it. Hwang does not define or describe “zoom area” other than to say that area detector 62 “detects a zoom area in accordance with a zoom magnification factor.” Ex. 1005, 3:33–37. Hwang’s mention of a “zoom area” does not describe identifying a subset of an input image to magnify. Tr. 27:20–23 (admitting that Hwang does not describe a zoom area less than the entire input image). Moreover, as we explain below, the zoom magnification factor does not identify a subset of an image to magnify. Thus, Hwang’s association of the zoom area and zoom magnification factor suggests that a zoom area is not a subset of an input image.

Hwang continues by describing that image interpolator 64 receives the known pixel information corresponding to the spatially zoomed image and extends the image by adding subpixels next to the known pixels. Ex. 1005, 3:44–49. Patent Owner argues that “[i]mage interpolator 64 receives the ‘zoomed image,’ not the entire input image,” and that

    Because . . . the extension into subpixel resolution is only performed on the spatially zoomed image p(i-v(n);n), not the entire image, and because values for magnification factor are not taught, there is no teaching in Hwang that the number of pixels in the resultant image signal q(i;n) is higher than the input images.

PO Resp. 12.

Patent Owner constructs an example that it argues shows how Hwang’s system would operate, with an input image of 16×16 pixels (256 pixels) and a zoom magnification factor m of 2. Id. According to Patent Owner, in its example, the sampling interval would be an 8×8 region of the original 16×16 image, that sampling interval would be magnified by a factor of 2, and the resulting output image would be 16×16 pixels (256 pixels), the same size as the input image. Id. at 12–14. Patent Owner argues that if the zoom magnification factor was 4, the output image would be 8×8 pixels (64 pixels), a lower number of pixels than the input image. Id. at 14.

Petitioner responds by arguing that Hwang’s entire image is magnified, and that the output image has more pixels than the input image because pixels are added at subpixel locations next to the pixels from the input image. Reply 5–10 (citing Ex. 1002 ¶¶ 62–69; Ex. 1022 ¶¶ 7–9). Petitioner points to description in Hwang that “[w]hen magnifying an image, the number of pixels for displaying the image increases,” and other examples in which an input image is “magnified” without any indication that the description is limited to only a portion of the input image. Id. at 11–13 (citing Ex. 1005, 1:15–18, 2:33–42; Ex. 1022 ¶¶ 5–6). Petitioner takes issue with Patent Owner’s understanding of a sampling interval, arguing that it is the distance between pixels and not a portion of the input image to be magnified. Id. at 10–11.

We agree with Petitioner. Hwang does not state the numbers of pixels in the input and output images of its preferred embodiment. However, Hwang, in its background, explains that “[w]hen magnifying an image, the number of pixels for displaying the image increases.” Ex. 1005, 1:15–17.
According to Hwang, “[g]enerally, methods to form high resolution magnified images from small-sized low resolution images employ an interpolation algorithm which only uses information from spatially adjacent pixels in a static image.” Id. at 1:19–22. Although these statements in Hwang are in its background, and do not expressly describe the preferred embodiment, they evidence an understanding in Hwang that increasing resolution from low to high involves increasing the number of pixels from IPR2020-00479 Patent 6,349,154 B1 25 the low resolution image to the high resolution image. In the preferred embodiment, Hwang’s system does this by interpolating what a pixel value likely would be between two known pixels, based on information from a previous image and the estimated motion between the images, and adding that pixel to the high-resolution output. Id. at 3:29–59. Hwang further explains that, “[i]n conventional digital zooming systems, the method used to spatially magnify image data is responsive only to a partial area of the input image. Therefore, spatial resolution is inversely proportional to a scale factor of a screen. This degrades image quality and limits magnification.” Id. at 2:23–27. Hwang’s preferred embodiment, however, is described as an improvement that does not suffer from the limitations of such conventional systems. Id. at 2:33–35 (“It is another object of the present invention to provide a digital zooming system which magnifies an input image without degrading image quality.”), 6:42–50 (“As described above, the present invention can obviate the limitations of spatial resolution in a normal CCD having a limited number of pixels, without requiring an enhanced high resolution CCD or a high power optical zoom lens. Image information between pixels is determined by detecting a motion component across a dynamic image. 
Accordingly, a high resolution image is obtained and the degradation and noise generated when magnifying the input image by interpolation are reduced.”). This further suggests that Hwang’s technique can operate on an entire input image rather than limiting it to a partial area of the input image.

Although Patent Owner argues that Hwang expressly describes only magnifying a “sampling interval” of an input image (PO Resp. 10–14; Sur-reply 3), this is a misstatement of Hwang’s teachings. Hwang defines a “sampling interval” as “a distance between pixels.” Ex. 1005, 1:33–35 (“As shown in FIG. 1, an image is sampled by a regular sampling interval (namely, a distance between pixels) about an input image.”). This is shown, for example, in Figure 1 of Hwang, reproduced below:

Figure 1 of Hwang is a diagram illustrating the principles for sampling a one-dimensional image signal of an input image. Id. at 2:55–56. As can be seen, the sampling interval is the distance from one pixel (e.g., x) to the next (e.g., x+1). The zoom magnification factor m is the number of sub-pixels that will be created between each pixel and the next. This can be seen in Figure 5, reproduced below:

Figure 5 of Hwang is a diagram showing a principle for interpolating an image signal by a convolution process using a sampled input image and a rectangle function. Id. at 2:66–3:2. According to Hwang,

Image interpolator 64 receives the known pixel information corresponding to the spatially zoomed image p(i-v(n);n) from the zoomed image compensator 63 as an input, performs a convolution between the known pixel information and a rectangle function, and thus extends the image resolution range to subpixels adjacent to the known pixel.

Id. at 3:43–48.
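The convolution described in the passage quoted above can be sketched, in simplified form, as a zero-order hold that holds each known pixel value across the m subpixel positions inside its sampling interval. This is an illustrative simplification of ours (the helper name is ours, not Hwang's, and Hwang's actual interpolator also uses motion-compensated information):

```python
# Simplified sketch of interpolation by convolution with a rectangle
# function: each known pixel value is held constant across the m subpixel
# positions within its sampling interval (a zero-order hold). Illustrative
# only; not Hwang's implementation.

def rectangle_interpolate(samples, m):
    """Expand each pixel of a 1-D signal into m subpixels."""
    out = []
    for value in samples:
        out.extend([value] * m)
    return out

row = [10, 20, 30]                      # one row of a low-resolution image
print(rectangle_interpolate(row, 2))    # [10, 10, 20, 20, 30, 30]
```

Even in this simplified form, the point the Board draws from Figure 5 is visible: applying the interpolation to the whole signal multiplies the number of samples by m rather than selecting a subset of the input.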
Figure 5 states that “Pixel width (dx) = Subpixel width (di) × m.” Here, the width of a pixel is equal to the width of a subpixel times the number of subpixels within the pixel, i.e., the zoom magnification factor m. Ex. 1022 ¶¶ 6–7. Thus, in essence, zoom magnification factor m can be used to calculate the number of additional pixels that will be created within a sampling interval. As noted above, Hwang describes that “[a]rea detector 62 . . . detects a zoom area in accordance with a zoom magnification factor m,” and “[z]oomed image compensator 63 receives the signal output from the area detector 62 as an input, [and] spatially magnifies a sampling interval of the detected image.” Id. at 3:33–40.

In light of the foregoing, we find that Hwang describes detecting the amount by which to magnify an input picture (detecting a zoom area in accordance with a zoom magnification factor) and determining how many subpixels to add between each pixel and the next (magnifying a sampling interval of the detected image) for the entire input picture. Ex. 1022 ¶¶ 6–9 (expert testimony explaining that Hwang describes creating new pixels for an entire input image). This is the most natural reading of Hwang, and Patent Owner has introduced no persuasive evidence to suggest otherwise. While Petitioner’s arguments are supported by the expert testimony of Dr. Rodriguez, which we credit, Patent Owner offers no expert testimony or other evidence of its own to support its attorney argument. In particular, Patent Owner offers no evidence in Hwang, and we find none, suggesting that Hwang’s system first truncates the input picture or otherwise selects only a subset of that picture before spatially zooming the picture and adding subpixels to create an output picture. Nor does Patent Owner offer any testimony or other evidence to rebut the expert testimony offered by Petitioner on this point, which we find credible.
Thus, upon consideration of the complete record, including Dr. Rodriguez’s testimony and the disclosure in Hwang, we find that Hwang describes receiving an entire input image, adding pixels to that image by an amount determined from the zoom magnification factor m, and outputting an image that has more pixels than the input image. Ex. 1005, 3:29–4:4; Ex. 1022 ¶¶ 6–9. This interpretation is consistent with the statement in Hwang’s background that “[w]hen magnifying an image, the number of pixels for displaying the image increases.” Ex. 1005, 1:15–17.

Patent Owner also argues that a skilled artisan would have understood a difference between the terms “zoom” and “magnification.” As we understand Patent Owner’s argument, Patent Owner contends that “magnify” refers to increasing the size of something generally, which could include the whole image, while “zoom” refers to magnifying, or increasing the size, of only a portion of an image. PO Resp. 11 (“A POSITA would have understood that the zoomed image compensator 63 outputs just the zoomed image, not the entire input image.”) (citing Ex. 2007, 5:7–10, Fig. 5; Ex. 2008, 5:22–27, Fig. 3); Sur-reply 3–4 (“As explained in Uniloc’s Response, Hwang’s zoom image compensator 63 spatially magnifies just a ‘sampling interval’ (i.e., the zoomed in area) of the ‘detected images’ (i.e., the input images) and outputs the ‘zoomed image,’ not the entire ‘input image.’”); Tr. 22:23–24:5. Here, Patent Owner cites to the Loveridge and Fling patents (Exs. 2007, 2008). Although Loveridge and Fling both have pictures that show magnifying only a portion of an image (Figs. 5 and 3, respectively), the portions of these references cited by Patent Owner appear to use “magnify” and “zoom” interchangeably and without any connection to choosing only a portion of an input image. Ex. 2007, 5:7–10; Ex. 2008, 5:22–27. Thus, we do not read Loveridge or Fling to support Patent Owner’s position.
In any case, we find that Hwang uses “zoom” and “magnify” interchangeably. See, e.g., Ex. 1005, 2:4–5, 2:39–49, 5:55, 8:48, 8:52.

In sum, we find that Hwang teaches “receiving a sequence of lower-resolution pictures” (Ex. 1005, 3:29–32 (“Motion detector 61 receives an image signal P(x;n) as an input.”)); “estimating motion in said sequence of lower-resolution pictures with sub-pixel accuracy” (id. at 3:29–32 (“Motion detector . . . detects horizontal and vertical components of both a pixel motion and a subpixel motion, and outputs the pixel motion signal V(n) and subpixel motion signal v(n).”)); and “creating the high-resolution still picture from said sequence of lower-resolution pictures and said estimated motion” (id. at 3:33–54 (describing area detector 62, zoomed image compensator 63, image interpolator 64, and image synthesizer 65)).

b) Combination of Hwang and Yoon

Petitioner cites Yoon for “subjecting the sequence of pictures to motion-compensated predictive encoding, thereby generating motion vectors representing motion between successive pictures of said sequence,” as recited in claim 1. Pet. 34–38. According to Petitioner, “[w]hile Hwang, as discussed above, discloses that its motion detector 61 detects motion between the received sequence of image frames based on subpixel accuracy for the purpose of generating a high-resolution picture, Hwang does not disclose the details of how its motion detector 61 carries out such motion detection.” Id. at 34–35. Petitioner contends that Yoon describes a “well-known motion detection technique of subjecting the input sequence of pictures to motion-compensated predictive encoding, thereby generating motion vectors representing motion between successive pictures of said sequence,” and that a skilled artisan would have consulted Yoon for the details of Hwang’s motion detector 61. Id.
at 35, 37–38, 43 (“A [person of ordinary skill in the art] would have understood that Yoon’s encoding functionality incorporated into Hwang’s system could replace the motion detector 61 of Hwang’s Figure 6 or be implemented as part of the motion detector 61.”), 49 (“Hwang does not disclose the details of how its motion detector 61 carries out such motion detection. Thus, a [person of ordinary skill in the art] would have been motivated to look elsewhere to implement Hwang’s technique for detecting motion with subpixel accuracy.”).

Petitioner points to Yoon’s Figure 1 and corresponding description to show that Yoon’s system uses a motion-compensated prediction method to encode received images and generate motion vectors to a half-pixel accuracy in accordance with MPEG standards. Pet. 35–37 (citing Ex. 1007, 1:56–2:24, Figs. 1, 2; Ex. 1002 ¶¶ 76–79), 47–49 (citing Ex. 1002 ¶¶ 92–93). Specifically, Petitioner contends that Yoon’s motion estimator 109, shown in Figure 1 (reproduced above), generates half-pixel resolution motion vectors, quantizer 103 outputs transformed and quantized image data, and entropy coder 104 combines the motion vectors and image data. Id. at 36–37 (citing Ex. 1007, 1:56–2:24, Figs. 1–2; Ex. 1002 ¶¶ 77–78). Petitioner contends that Yoon’s hybrid coder 100 (Figure 1) transmits the encoded image data along with the associated half-pixel resolution motion vectors, to decoder 200, and that decoder 200 (Figure 2) decodes the encoded differential image data based on the associated half-pixel resolution motion vectors to output decoded video (i.e., a sequence of pictures). Pet. 38 (citing Ex. 1007, 2:25–61).
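The half-pixel motion estimation attributed to Yoon can be sketched in one dimension. This is a simplified illustration under our own assumptions — the function names and the sum-of-absolute-differences (SAD) matching criterion are ours, not Yoon's: interpolate the reference frame to half-pixel positions, then find the half-pel displacement that best matches the current block.

```python
# Illustrative 1-D sketch of half-pixel motion estimation (our simplification,
# not Yoon's implementation): bilinearly interpolate the reference frame to
# half-pixel positions, then search, in half-pixel steps, for the
# displacement that minimizes the sum of absolute differences (SAD).

def half_pel_samples(row):
    """Insert a bilinear half-pixel sample between each pair of pixels."""
    out = []
    for i in range(len(row) - 1):
        out.append(row[i])
        out.append((row[i] + row[i + 1]) / 2)   # half-pixel position
    out.append(row[-1])
    return out

def best_half_pel_mv(block, reference):
    """Return the motion vector (in pixels, to half-pel resolution) that
    best aligns `block` with `reference` under the SAD criterion."""
    ref = half_pel_samples(reference)            # index i is position i/2
    span = 2 * (len(block) - 1) + 1
    best_mv, best_cost = 0.0, float("inf")
    for start in range(len(ref) - span + 1):
        candidate = ref[start:start + span:2]    # full-pel spaced samples
        cost = sum(abs(a - b) for a, b in zip(block, candidate))
        if cost < best_cost:
            best_mv, best_cost = start / 2.0, cost
    return best_mv

reference = [0, 10, 20, 30, 40, 50]
block = [5, 15, 25]          # reference content displaced by half a pixel
print(best_half_pel_mv(block, reference))   # 0.5
```

The sketch shows why sub-pixel accuracy matters: a displacement of half a pixel produces a zero-cost match only once the reference has been interpolated to half-pixel positions.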
In Petitioner’s combination:

Yoon’s conventional encoder would have subjected the input sequence of low-resolution pictures to motion-compensated predictive encoding to thereby generate motion vectors representing motion between successive pictures of the low-resolution picture sequence at a subpixel accuracy . . .; [and] the output from the conventional encoder and its processes (i.e., the encoded image data and the half-resolution motion vectors) would have then been transmitted over a communication channel (e.g., a limited bandwidth channel) to a location where the high resolution zooming processes of Hwang would have been carried out (e.g., the operations associated with area detector 62, zoomed image compensator 63, image interpolator 64, and image synthesizer 65).

Pet. 43 (citing Ex. 1002 ¶¶ 77–78). Regarding “decoding said encoded pictures,” as recited in claim 1, Petitioner argues that, in its combination, Yoon’s decoder 200 would have received incoming encoded data from hybrid coder 100, separated the received data into encoded image data and associated half-pixel resolution motion vectors, and decoded the image data to produce uncompressed image data that would have been fed into Hwang’s area detector 62. Pet. 44, 54–56 (citing Ex. 1007, 2:25–61; Ex. 1002 ¶¶ 88, 100–104).

Patent Owner responds that Yoon includes no teaching of how motion vectors are encoded in hybrid encoder 100. PO Resp. 35. Instead, Patent Owner argues, Yoon’s “Figure 1 depicts . . . block elements labeled ‘Motion Compensated Predictor 110’ and ‘Motion Estimator 109’ but there is no corresponding description in Yoon [](including in the First Paragraph [Ex. 1007, 1:28–54]) of how these two specific elements function.” PO Resp. 35. In Petitioner’s theory of Yoon, summarized above, Petitioner relies on the three paragraphs of description in Yoon, at column 1, line 28, through column 2, line 24, as being a continuous description of Yoon’s Figure 1. Pet.
34–38. Patent Owner disputes this, arguing that only the first paragraph (Ex. 1007, 1:28–54) describes Yoon’s Figure 1. PO Resp. 15–18. Patent Owner argues that the second paragraph (Ex. 1007, 1:56–67) describes an improvement upon the coder of Figure 1, which, in Patent Owner’s view, means that this paragraph describes something other than Figure 1. PO Resp. 18 (citing Ex. 1007, 1:56–57 (“The coding efficiency of the hybrid coder 100 can be improved by using a motion compensation or motion compensated prediction method.”)). Thus, Patent Owner argues, “the depiction of elements 110 (‘motion compensated predictor’) and 109 (‘motion estimator’) in Figure 1 are in error, and conflict with the written description the prior art hybrid decoder 100 of Yoon.” Id. As to the third paragraph (Ex. 1007, 2:1–24), Patent Owner argues that the third paragraph is limited to describing “unnamed ‘further enhancements’ in unnamed ‘some of the currently available schemes,’” rather than describing the system of Figure 1. PO Resp. 18–19 (quoting Ex. 1007, 2:1–3 (“In some of the currently available schemes such as ISO/IEC MPEG standards, for further enhancement of the coding efficiency, . . .”)). Patent Owner argues that the final sentence of the third paragraph confirms that it is a discussion of schemes described in other patents and publications rather than of Yoon’s Figure 1. Id. at 19 (citing Ex. 1007, 2:22–24). According to Patent Owner, the third paragraph’s description of motion vector generation and half-pixel accuracy should not be applied to the hybrid coder 100 described in the first paragraph. Id.

We agree with Petitioner. Yoon’s first paragraph describing Figure 1 details the functions of several blocks of coder 100, but not the functions of motion compensated predictor 110 and motion estimator 109. Ex. 1007, 1:28–54.
Yoon’s second paragraph transitions into a description of how the “coding efficiency of the hybrid coder 100 can be improved by using a motion compensation or motion compensated prediction method,” and describes a process of motion compensated prediction “based on an estimation of a translator motion of the block between the present and the previous frames.” Id. at 1:55–62. The natural reading of this description is that it is continuing the description of Figure 1 and, specifically, is explaining the functionality of the modules labeled in Figure 1 as “motion compensated predictor” 110 and “motion estimator” 109. In other words, the description of “an estimation of a translator motion of the block between the present and the previous frames,” Ex. 1007, 1:61–62, describes the functionality of motion estimator 109, which, as shown in Figure 1, receives as inputs a present frame and a frame stored in frame memory 108. Likewise, the description of “a process of predicting each block of pixels in a present frame from the pixels in its previous frame based on an estimation of a translatory motion of the block between the present and the previous frames,” id. at 1:59–62, describes the functionality of motion compensated predictor 110, which, as shown in Figure 1, takes as inputs a frame from frame memory 108 and the output of motion estimator 109. Yoon’s mention of improvement (id. at 1:55–56) is in reference to motion estimation making coder 100 more efficient, not a transition to a description of a separate embodiment. Patent Owner’s contrary reading is, at best, strained. As to the third paragraph (Ex. 1007, 2:1–24), it is a continuation of the description of Figure 1, and specifically continues to describe coder 100’s generation of motion vectors. This is confirmed by the next paragraph (id.
at 2:25–28), which states that “[t]he half-pixel resolution motion vectors, together with the entropy coded data, are forwarded via the transmission channel to the associated decoder 200 for use in the conducting motion compensated prediction therein.” Here, Yoon describes packaging the entropy coded data with motion vectors and transmitting them to the decoder of Figure 2. This is consistent with Figure 1, which shows entropy coder 104 receiving frames from quantizer 103 and motion vectors from motion estimator 109, and outputting them “to decoder,” which we take as a reference to decoder 200 of Yoon’s Figure 2. Patent Owner’s arguments to the contrary have no merit.

In sum, Yoon describes hybrid coder 100 performing motion compensated prediction and generation of half-pixel resolution motion vectors based on present and past input frames. Ex. 1007, 1:56–2:10, Fig. 1 (motion estimator 109, motion compensated predictor 110). We find that this teaches “subjecting the sequence of pictures to motion-compensated predictive encoding, thereby generating motion vectors representing motion between successive pictures of said sequence,” as recited in claim 1. We further find that Yoon’s description of decoder 200 receiving the output of coder 100 and decoding the received data into half-pixel resolution motion vectors and uncompressed image data teaches “decoding said encoded pictures,” as recited in claim 1. Ex. 1007, 2:25–61. As to “creating the high-resolution picture from said decoded pictures and the motion vectors generated in said encoding step,” as recited in claim 1, Petitioner contends that Yoon teaches that “[a] demultiplexor within the decoder (e.g., like the demultiplexor 201 in the decoder 200 in Yoon’s Figure 2) would have received the incoming encoded data and separated the encoded image data and the associated half-resolution motion vectors.” Pet. 44.
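The encoder-to-decoder hand-off described above can be pictured as a simple multiplex/demultiplex data flow. Every name in the sketch below is a hypothetical stand-in of ours; it is not an implementation of Yoon's coder or of Hwang's zooming system.

```python
# Illustrative data-flow sketch only (all names are our hypothetical
# stand-ins): encoded pictures are packaged with their associated motion
# vectors for transmission, then separated again at the receiving end.

def multiplex(encoded_frames, motion_vectors):
    """Package each encoded frame with its associated motion vector."""
    return list(zip(encoded_frames, motion_vectors))

def demultiplex(stream):
    """Separate received records back into frames and motion vectors."""
    frames = [frame for frame, _ in stream]
    vectors = [mv for _, mv in stream]
    return frames, vectors

# Two encoded pictures with half-pel motion vectors, sent then separated.
stream = multiplex(["frame0", "frame1"], [0.0, 0.5])
frames, vectors = demultiplex(stream)
print(frames, vectors)   # ['frame0', 'frame1'] [0.0, 0.5]
```

In the proposed combination, the separated frames and vectors would then be handed to the high-resolution synthesis stage, the role the Petition assigns to Hwang's area detector 62 through image synthesizer 65.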
The image data and the motion vectors demultiplexed by Yoon’s decoder 200 would then have been fed as inputs to Hwang’s area detector 62 and the remaining image processing components of Hwang’s digital zooming system shown in Figure 6 (i.e., zoomed image compensator 63, image interpolator 64, and image synthesizer 65), which would have created a high-resolution picture. Id. at 56–62. As to this final limitation of claim 1, Patent Owner repeats its arguments, detailed above, that Yoon does not teach a hybrid coder that creates half-pixel resolution motion vectors. PO Resp. 36–38. These arguments do not rebut Petitioner’s showing for the reasons given above. We find that the combination of Hwang and Yoon teaches “creating the high-resolution picture from said decoded pictures and the motion vectors generated in said encoding step,” as recited in claim 1. In sum, on the complete record, we find that the combination of Hwang and Yoon teaches each limitation of claim 1.

c) Reasons to combine Hwang and Yoon; reasonable expectation of success

Petitioner contends that a skilled artisan would have combined Yoon’s teachings with those of Hwang such that Yoon’s video compression technology would have reduced the data required to transmit video over a limited-bandwidth communication channel in applications such as surveillance and remote sensing. Pet. 39 (citing Ex. 1002 ¶ 83; Ex. 1007, 1:13–23 (explaining that data compression was a known technique for reducing the volume of data to accommodate available bandwidth)), 51–53. Here, Petitioner relies on the uncontroverted testimony of Dr. Rodriguez, which we credit. Ex. 1002 ¶¶ 81–84. Petitioner also contends that MPEG encoding, described in Yoon, was a widely used encoding scheme, as the ’154 patent itself acknowledges. Id. at 39–40 (citing Ex. 1007, 1:23–2:61; Ex. 1001, 1:63–65, 4:57–60), 52–53. Dr.
Rodriguez testifies that a skilled artisan “would have had the capability and knowledge to implement the above modification to Hwang while ensuring that Hwang’s image processing techniques performed properly and would have expected the combined Hwang-Yoon process to perform efficient and effective image/video processing without detracting from the processes disclosed by Hwang.” Ex. 1002 ¶ 98. We credit this testimony as well.

Patent Owner does not specifically challenge Petitioner’s proposed reasons to combine Hwang and Yoon. Rather, Patent Owner argues that Petitioner’s “grounds and the factual support in . . . Dr. Rodriguez’s Declaration are all based on a level of ordinary skill in the art that is higher and more detailed than is warranted based on the record,” that Petitioner “relies heavily on Dr. Rodriguez to demonstrate key gaps in the prior art and motivation to combine,” and that Dr. Rodriguez’s “opinions should be given no weight for the reasons noted above.” PO Resp. 25. For the reasons given above, Petitioner’s and Dr. Rodriguez’s statement of the level of skill in the art is correct, and we credit Dr. Rodriguez’s testimony. Thus, Patent Owner’s arguments do not rebut Petitioner’s showing. On the complete record, we find that a skilled artisan would have had reasons, with rational underpinning, to combine Hwang and Yoon, and that the skilled artisan would have had a reasonable expectation of success.

d) Dependent claims 2–4

Claim 2 depends from claim 1 and adds “the step of storing the encoded pictures on a storage medium.” Dr. Rodriguez testifies that, in an MPEG encoder/decoder system, such as shown in Yoon’s Figures 1 and 2, encoded pictures necessarily would have been stored in buffers. Ex. 1002 ¶¶ 120–121; Pet. 62–65. Dr. Rodriguez relies on a textbook describing the MPEG standard (Joan L. Mitchell et al., MPEG VIDEO COMPRESSION STANDARD (1996), 363–64, 370–71 (Ex. 1006, “Mitchell textbook”)).
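The buffering Dr. Rodriguez describes can be pictured with a minimal sketch, ours alone, under the simplifying assumption that a first-in, first-out coded-picture buffer suffices for illustration; it is not drawn from the Mitchell textbook or from Yoon.

```python
# Minimal illustrative sketch (our stand-in, not the MPEG standard's buffer
# model): encoded pictures are stored in a FIFO buffer before transmission.

from collections import deque

class CodedPictureBuffer:
    def __init__(self):
        self._fifo = deque()

    def store(self, encoded_picture):
        self._fifo.append(encoded_picture)   # encoded picture is stored

    def transmit(self):
        return self._fifo.popleft()          # pictures leave in arrival order

buf = CodedPictureBuffer()
buf.store(b"I-picture")
buf.store(b"P-picture")
print(buf.transmit())   # b'I-picture'
```

The sketch only illustrates the point relied on for claim 2: between encoding and transmission (or decoding), each encoded picture is held in storage of some kind.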
Petitioner contends that the combined system of Hwang and Yoon would have had such a buffer. Pet. 66–67; Ex. 1002 ¶¶ 122–123. We credit Dr. Rodriguez’s uncontroverted testimony and find that Hwang and Yoon teach the additional limitation of claim 2.

Claim 3 depends from claim 1 and adds “wherein the step of encoding the sequence of pictures comprises the use of an MPEG encoder which is arranged to produce a sequence of I and P-pictures.” Dr. Rodriguez testifies that an MPEG-compliant encoder, such as that described in Yoon’s Figure 1, necessarily would have produced a sequence of I and P pictures. Ex. 1002 ¶¶ 125–128; Pet. 67–71. Dr. Rodriguez relies on the Mitchell textbook (Ex. 1006, 56–58). We credit Dr. Rodriguez’s uncontroverted testimony and find that Hwang and Yoon teach the additional limitation of claim 3.

Claim 4 depends from claim 1 and adds “wherein the creating step includes recursively adding, in the high-resolution domain, a current decoded picture to a previously created picture, said previously created picture being subjected to motion-compensation in accordance with the motion vector which is associated with the current decoded picture.” We find that Hwang’s image synthesizer 65, including adder 653, performs this function. Ex. 1005, 3:49–4:4, 5:36–53; Pet. 71–76; Ex. 1002 ¶¶ 130–138. Thus, we find that Hwang and Yoon teach the additional limitation of claim 4.

Patent Owner does not challenge Petitioner’s allegations as to claims 2–4, other than to argue that claims 2–4 are patentable because they depend from claim 1. As explained above, claim 1 is not patentable. In sum, on the complete record, we find that the combination of Hwang and Yoon teaches each limitation of claims 2–4 and that a skilled artisan would have had reasons, with rational underpinning, to combine Hwang and Yoon, with a reasonable expectation of success.

4.
Conclusion of Obviousness

As explained above, the combination of Hwang and Yoon teaches each limitation of claims 1–4. Petitioner has introduced persuasive evidence that a skilled artisan would have had reasons to combine the teachings of Hwang and Yoon with a reasonable expectation of success. Patent Owner does not argue or introduce evidence of objective indicia of nonobviousness. In sum, upon consideration of all the evidence, we conclude that Petitioner has proved by a preponderance of the evidence that claims 1–4 would have been obvious over Hwang and Yoon.

III. PATENT OWNER’S ARGUMENT UNDER ARTHREX

Patent Owner argues that the Federal Circuit determined in Arthrex, Inc. v. Smith & Nephew, Inc., 941 F.3d 1320 (Fed. Cir. 2019), that Administrative Patent Judges were appointed unconstitutionally, but that the Federal Circuit’s remedy was inappropriate and insufficient to cure the constitutional violation. PO Resp. 39–42; Sur-reply 6–7. We decline to consider Patent Owner’s constitutional argument because the U.S. Supreme Court addressed this issue in United States v. Arthrex, Inc., 141 S. Ct. 1970 (2021).

IV. CONCLUSION6

Petitioner has shown by a preponderance of the evidence that claims 1–4 would have been obvious over Hwang and Yoon.

V. ORDER

In consideration of the foregoing, it is hereby: ORDERED, based on a preponderance of the evidence, that claims 1–4 are unpatentable; and FURTHER ORDERED, because this is a final written decision, the parties to this proceeding seeking judicial review of our Decision must comply with the notice and service requirements of 37 C.F.R. § 90.2.

6 Should Patent Owner wish to pursue amendment of the challenged claims in a reissue or reexamination proceeding subsequent to the issuance of this decision, we draw Patent Owner’s attention to the April 2019 Notice Regarding Options for Amendments by Patent Owner Through Reissue or Reexamination During a Pending AIA Trial Proceeding. See 84 Fed.
Reg. 16,654 (Apr. 22, 2019). If Patent Owner chooses to file a reissue application or a request for reexamination of the challenged patent, we remind Patent Owner of its continuing obligation to notify the Board of any such related matters in updated mandatory notices. See 37 C.F.R. § 42.8(a)(3), (b)(2).

Claims           35 U.S.C. §   Reference(s)/Basis   Claims Shown Unpatentable   Claims Not Shown Unpatentable
1–4              103           Hwang, Yoon          1–4
Overall Outcome                                     1–4

FOR PETITIONER:

Naveen Modi
Joseph E. Palys
Quadeer A. Ahmed
Paul Hastings LLP
naveenmodi@paulhastings.com
josephpalys@paulhastings.com
quadeerahmed@paulhastings.com
PH-Google-UnilocIPR@paulhastings.com

FOR PATENT OWNER:

Ryan Loveless
Brett Mangrum
James Etheridge
Brian Koide
Jeffrey Huang
Etheridge Law Group
ryan@etheridgelaw.com
brett@etheridgelaw.com
jim@etheridgelaw.com
brian@etheridgelaw.com
jeff@etheridgelaw.com