Conduent Business Services, LLC, Patent Trial and Appeal Board, Appeal 2019-002245 (P.T.A.B. June 23, 2020)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 13/932,453
FILING DATE: 07/01/2013
FIRST NAMED INVENTOR: Edgar A. Bernal
ATTORNEY DOCKET NO.: 20130239-US-NP
CONFIRMATION NO.: 6908

144570 7590 06/23/2020
Ortiz & Lopez, PLLC/Conduent
P.O. Box 4484
Albuquerque, NM 87196-4484

EXAMINER: HASAN, MAINUL
ART UNIT: 2485
NOTIFICATION DATE: 06/23/2020
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on above-indicated “Notification Date” to the following e-mail address(es): Conduent.PatentDocketing@conduent.com, docketing@olpatentlaw.com, klopez@olpatentlaw.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte EDGAR A. BERNAL and WENCHENG WU

Appeal 2019-002245
Application 13/932,453
Technology Center 2400

Before CAROLYN D. THOMAS, JAMES B. ARPIN, and ADAM J. PYONIN, Administrative Patent Judges.

ARPIN, Administrative Patent Judge.

DECISION ON APPEAL

Appellant[1] appeals under 35 U.S.C. § 134(a) from the Examiner’s rejections of claims 44–63. Final Act. 2.[2] Claims 1–43 are canceled. Id. We have jurisdiction under 35 U.S.C. § 6(b). We affirm-in-part.

[1] “Appellant” here refers to “applicant” as defined in 37 C.F.R. § 1.42. Appellant identifies the real parties-in-interest as Conduent Business Services, LLC. Appeal Br. 2.
[2] In this Decision, we refer to Appellant’s Appeal Brief (“Appeal Br.,” filed August 13, 2018) and Reply Brief (“Reply Br.,” filed January 18, 2019); the Final Office Action (“Final Act.,” mailed January 24, 2018) and the Examiner’s Answer (“Ans.,” mailed November 19, 2018); and the Specification (“Spec.,” filed July 1, 2013). Rather than repeat the Examiner’s findings and determinations and Appellant’s contentions in their entirety, we refer to these documents.

STATEMENT OF THE CASE

Appellant’s claimed methods, computer-usable media, and systems generally relate to enhancement of images obtained from video and, in particular, to vehicle-velocity aware image enhancement. Spec. ¶ 1. As noted above, claims 44–63 stand rejected. Claims 44, 55, and 59 are independent. Appeal Br. 25 (claim 44), 28 (claim 55), 29 (claim 59) (Claims App.). Claim 44 recites “[a] computer implemented method”; claim 55 recites “[a] non-transitory computer-usable medium for producing an enhanced image, the computer-usable medium embodying computer program code, the computer program code comprising computer executable instructions configured for” performing functions substantially as recited in claim 44; and claim 59 recites “[a] system comprising: a video camera producing video data wherein the video camera images a road,” which system performs functions substantially as recited in claim 44. Claims 45–54 depend directly from claim 44, claims 56–58 depend directly or indirectly from claim 55, and claims 60–63 depend directly from claim 59. Id. at 25–30. Claims 44, 52, 59, and 60, reproduced below with disputed limitations emphasized, are illustrative.

44.
A computer-implemented method comprising:
obtaining video data wherein a video camera imaging a road produced the video data;
extracting from the video data an image of a vehicle wherein the vehicle is moving;
determining a location wherein the location is the vehicle's location within the image of the vehicle;
obtaining a velocity estimate of the vehicle’s speed and direction on the road; and
producing an enhanced image based at least in part on the image and the velocity estimate wherein the enhanced image shows at least a portion of the vehicle.

Id. at 25 (emphases added).

52. The computer-implemented method of claim 44 further comprising:
estimating a point spread function in the frequency domain wherein the point spread function is based at least in part on the velocity estimate, a camera configuration, a capture parameter, and an assumption of linear motion;
determining a complex conjugate of a frequency domain representation the point spread function;
determining a frequency domain representation of the image;
determining an image power spectral density that is the power spectral density of the image; and
determining a noise power spectral density that is the power spectral density of additive noise;
wherein the enhanced image is produced by frequency domain deconvolution expressed as a fraction having a numerator and a denominator, wherein the numerator comprises the product of the frequency domain representation of the image, image power spectral density, and the complex conjugate, and wherein the denominator comprises the sum of the noise power spectral density with the product of the squared magnitude of the complex conjugate and the image power spectral density.

Id. at 27 (emphasis added).

59.
A system comprising:
a video camera producing video data wherein the video camera images a road;
a first image wherein the system extracts the first image from the video data, and wherein the first image shows a vehicle moving on the road;
a first location determined by the system wherein the location is the vehicle’s location within the first image;
a velocity estimate of the vehicle’s speed and direction; and
an enhanced image wherein the system produces the enhanced image based at least in part on the first image and the velocity estimate, and wherein the enhanced image shows at least a portion of the vehicle.

Id. at 29 (emphases added).

60. The system of claim 59 comprising:
extracting from the video data a plurality of additional images wherein the additional images are extracted from the video data, and wherein each of the additional images shows the vehicle; and
a plurality of registered images produced by registering the image and the additional images;
wherein the enhanced image is produced by averaging the registered images.

Id. at 29–30 (emphases added).

REFERENCES AND REJECTIONS

The Examiner relies upon the following references in rejecting the claims:

Name[3]    Number              Publ’d          Filed
Lim        US 2005/0019000 A1  Jan. 27, 2005   June 25, 2004
Lee        US 2006/0177145 A1  Aug. 10, 2006   Feb. 7, 2005
King       US 2007/0276600 A1  Nov. 29, 2007   Mar. 6, 2007
Wang       US 2008/0166023 A1  July 10, 2008   Jan. 7, 2008
Scofield   US 2011/0106416 A1  May 5, 2011     Apr. 22, 2010

[3] All reference citations are to the first named inventor only.

Claim 60 is rejected under 35 U.S.C. § 112(a) as lacking adequate written description, and claim 59 is rejected under 35 U.S.C. § 112(b) as indefinite. Final Act. 2–5. Further, claims 44, 47, 49–55, 58–60, and 63 are rejected under 35 U.S.C. § 102(a)(1) as anticipated by Lee. Id. at 5–16. In addition, claim 45 is rejected under 35 U.S.C.
§ 103(a) as obvious over the combined teachings of Lee and King (id. at 16–17); claim 46 is rejected under 35 U.S.C. § 103(a) as obvious over the combined teachings of Lee and Scofield (id. at 17–18); claims 48 and 62 are rejected under 35 U.S.C. § 103(a) as obvious over the combined teachings of Lee and Lim (id. at 18–20); and claims 56, 57, and 61 are rejected under 35 U.S.C. § 103(a) as obvious over the combined teachings of Lee and Wang (id. at 21–22).

We review the appealed rejections for error based upon the issues identified by Appellant, and in light of the arguments and evidence produced thereon. Ex parte Frye, 94 USPQ2d 1072, 1075 (BPAI 2010) (precedential). Arguments not made are waived. See 37 C.F.R. § 41.37(c)(1)(iv). Unless otherwise indicated, we adopt the Examiner’s findings in the Final Office Action and the Answer with respect to the affirmed rejections as our own and add any additional findings of fact for emphasis. We address the rejections below.

ANALYSIS

A. Adequate Written Description of Claim 60

The Examiner finds that the Specification fails to provide an adequate written description for “a plurality of additional images,” as recited in claim 60, reproduced above. Final Act. 2–3. Section 112(a) provides:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same . . . .

(Emphases added.)
As our reviewing court has explained,

[t]he “written description” requirement implements the principle that a patent must describe the technology that is sought to be patented; the requirement serves both to satisfy the inventor’s obligation to disclose the technologic knowledge upon which the patent is based, and to demonstrate that the patentee was in possession of the invention that is claimed.

Capon v. Eshhar, 418 F.3d 1349, 1357 (Fed. Cir. 2005) (emphasis added).

The Examiner finds:

Claim 60 recites “extracting from the video data a plurality of additional images”. After careful review of the originally filed specification, the Examiner could not find any mention of extracting plurality of additional images from the captured image. This has at the least resulted in a new matter situation. In P7, [0022], L1-3 of originally filed specification discloses “the image extraction module 131 can extract an image or sequence of images from the video data”, but nowhere in the specification, has it disclosed extracting a plurality of “additional” images. When an extraction module extracts a sequence of images from the video data, it is “the” extracted images. The specification is silent about where and why the “additional” images are extracted. For written description, the specification as filed must describe the claimed invention in sufficient detail so that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention.

Final Act. 2–3 (emphases added). Appellant acknowledges that the term “additional images” does not appear in the Specification. Reply Br. 2. Nevertheless, Appellant points to numerous instances where the Specification discloses the extraction of multiple images from the video data, showing possession of the recited additional images. Appeal Br. 10–11 (citing Spec., Abstract, ¶¶ 16, 20, 22, 30, 33, 37); see also Spec., Fig.
3 (step 204 reciting “EXTRACT AN IMAGE OR SEQUENCE OF IMAGES FROM THE VIDEO DATA”). In particular, the Specification discloses, “[t]he image domain approach utilizes the output from the image extraction module 131 and the vehicle detection module 132 and applies feature tracking, optical flow or block matching algorithms in order to determine a vector field that describes the displacement of the target vehicle across the sequence of frames.” Spec. ¶ 24 (emphasis added).

Claim 60 further recites that the additional images are registered, and an enhanced image is produced by averaging the registered images. Appeal Br. 29–30 (Claims App.). The Specification also discloses:

Velocity-aware denoising is yet another manner of enhancement that is particularly well suited for low-light capture conditions, fast frame rates or short exposure times where the SNR of the individual frames is low. It relies on registration of adjacent frames and denoising via averaging and self-similarity methods. Knowledge of the vehicle velocity vector, along with the camera configuration and capture parameters provide an estimate of the relative translation, in pixels, between the low-resolution images of the moving vehicle across adjacent frames, which can then be used to perform image registration, prior to performing denoising via averaging.

Id. ¶ 37 (emphasis added).

Because the Specification is written for a person of ordinary skill in the relevant art, to satisfy the written description requirement, the Specification needs to provide sufficient disclosure to convince a person of ordinary skill in the relevant art that Appellant possessed the invention. LizardTech Inc. v. Earth Res. Mapping, Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005); In re GPAC Inc., 57 F.3d 1573, 1579 (Fed. Cir. 1995).
Appellant persuades us that, given the Specification’s disclosure, a person of ordinary skill in the relevant art would have understood that Appellant was in possession of an embodiment of the invention including “a plurality of additional images,” as recited in claim 60. See Appeal Br. 10–11; Reply Br. 2–3. Thus, we are persuaded that the Examiner erred in rejecting claim 60 for lack of adequate written description, and we do not sustain the Examiner’s rejection of claim 60 for lack of written description.

B. Indefiniteness of Claim 59

The Examiner rejects claim 59, reproduced above, as indefinite. Final Act. 3–4. In particular, claim 59 recites a “system” comprising “a video camera producing video data wherein the video camera images a road.” Appeal Br. 29 (Claims App.). Claim 59 further recites “a first image” extracted by “the system,” “a first location determined by the system,” and “an enhanced image” produced by “the system.” Id. (emphases added). As the Examiner notes, “the elements of the system ‘a first image’, ‘a first location’ ‘a velocity estimate’ do not seem to be capable of performing any functions, other than they are the results of some function carried out by the system.” Final Act. 4 (italics added); see Ans. 6–7.

35 U.S.C. § 112(b) provides “[t]he specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.” “A decision on whether a claim is indefinite under 35 U.S.C. 112(b) . . . requires a determination of whether those skilled in the art would understand what is claimed when the claim is read in light of the specification.” MPEP § 2173.02 (emphasis added). Appellant contends:

A person of ordinary skill in the art would understand the claim terms based on their plain and ordinary meaning.
The claim terms at issue here are: “a first image[,]” “a first location,” “a velocity estimate,” and “an enhanced image.” . . . It is clear that “a first image” is an image of a vehicle on a road and that has been extracted from the video data. It is clear that “a first location” is the location of the vehicle in the first image as determined by the claimed system. It is clear that “a velocity estimate” is an estimate of the vehicle's speed and direction. It is clear that an “enhanced image” is an image produced by the system based at least in part on the first image and the velocity estimate, and wherein the enhanced image shows at least a portion of the vehicle.

Appeal Br. 12–13; Reply Br. 4. However, Appellant misunderstands the rejection. The Specification describes systems that perform the functions recited in claim 59. Spec. ¶¶ 16–20, Figs. 1 (system 100), 2 (system 106). The Specification explains that the components of these systems can perform the recited functions. For example, the Specification explains:

Embodiments of the systems and methods disclosed herein generally include a video capture module 130 configured to receive image data of the scene being monitored, an image extraction module 131 configured to extract still images from incoming video data, a vehicle detection module 132 that detects the approximate location of a target vehicle in the scene being monitored, a vehicle-velocity determination module 133 configured to determine the amplitude and direction of a vector that describes the velocity of the target vehicle in image pixel coordinates, and a vehicle-velocity-aware enhancing module 134 configured to enhance the image(s) of the target vehicle extracted from the video feed based on the vehicle's speed, direction of motion, location within the camera view and image capture parameters.

Id. ¶ 20 (emphases added); see id.
¶¶ 16 (“The system 100 consists of at least an image capture device 102 being operably connected to a data processing device 106 capable of executing modules and being operably connected to a database 104 via a network 120, or other wired or wireless means of connection.”), 17 (“The system 106 comprises a central processor 108, a main memory 110, and an input/output controller 112.”). However, the Specification does not describe that a system comprising only a video camera can perform the recited functions, or that information such as images, location, and velocity, themselves, are part of the system. Thus, we are not persuaded that the Examiner erred in the indefiniteness rejection of claim 59, and we sustain the Examiner’s indefiniteness rejection of claim 59.

C. Anticipation by Lee

“A claim is anticipated only if each and every element as set forth in the claim is found, either expressly or inherently described, in a single prior art reference.” Verdegaal Bros., Inc. v. Union Oil Co., 814 F.2d 628, 631 (Fed. Cir. 1987). The elements must be arranged as required by the claim, but this is not an ipsissimis verbis test. See In re Bond, 910 F.2d 831, 832 (Fed. Cir. 1990). Moreover, “it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom.” In re Preda, 401 F.2d 825, 826 (CCPA 1968).

The Examiner rejects claims 44, 47, 49–55, 58–60, and 63 as anticipated by Lee. Final Act. 5–16. Appellant challenges this rejection of (1) independent claim 44, (2) dependent claims 49 and 60, and (3) dependent claims 52 and 58, separately. Appeal Br. 13–20. For the reasons given below, we sustain the Examiner’s anticipation rejection of claims 44, 47, 49–51, 53–55, 59, 60, and 63, but we do not sustain that rejection of claims 52 and 58.

1.
Independent Claim 44

The Examiner finds that Lee discloses each and every element of claim 44 as set forth in that claim. Final Act. 5–6. The Examiner makes corresponding findings with respect to claims 47, 50, 51, 53–55, 59, and 63. Id. at 6–16. Because Appellant does not challenge the anticipation rejection of those claims separately from claim 44, except for our ultimate decision, we do not address the merits of the rejection of those claims further herein. See 37 C.F.R. § 41.37(c)(1)(iv).

With respect to claim 44, Appellant challenges the Examiner’s rejection for two reasons. First, Appellant contends Lee teaches measuring the velocity of an image of a vehicle, not of the vehicle itself. Appeal Br. 13–14; Reply Br. 4–5. Second, Appellant contends Lee discloses examining the motion of objects, which are blocks in the image, not parts of the vehicle. Appeal Br. 14–15; Reply Br. 5. We disagree.

Appellant contends, “[n]one of claims recite the ‘vehicle’s image’s velocity’ or the ‘speed and direction of an image of the vehicle.’ The claimed velocities are those of a physical thing, not of that thing’s image. The office action incorrectly confuses images of things with the things themselves.” Appeal Br. 14. Referring to Lee’s Figure 1, however, Lee discloses:

The captured image 105 includes an image 110 of a car that is in motion along a roadway that has one edge 130 visible in the image 105. Because the car is in motion, the image 110 of the car is actually spread out slightly as depicted by the second outline 115 of the image of the car, with the amount of spread determined by at least the speed of the car and an exposure time for the image. The license plate 120 also has an image 125 that is spread out similarly to the image of the car, as depicted by the second outline 130, because the license plate 120 has the same motion as the car.

Lee ¶ 15 (emphases added); see Final Act. 5–6; Ans. 7–8.
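Lee's observation that the image of a moving car is spread by an amount determined by at least the car's speed and the exposure time can be sketched numerically. The following is a hypothetical illustration, not code from Lee or the application; the function name, units, and pixel-domain velocity are assumptions. It builds a linear motion-blur point spread function whose extent is the image-plane speed multiplied by the exposure time, under the assumption of linear motion during the exposure (cf. claim 52).

```python
import numpy as np

def motion_blur_psf(shape, velocity_px_per_s, exposure_s):
    """Hypothetical sketch: a linear motion-blur point spread function.

    The blur extent in pixels is the image-plane speed times the
    exposure time, assuming linear motion during the exposure.
    """
    vx, vy = velocity_px_per_s
    # blur length in pixels (at least one pixel for a static scene)
    n = max(int(round(np.hypot(vx, vy) * exposure_s)), 1)
    psf = np.zeros(shape)
    for k in range(n):
        # spread unit energy evenly along the motion direction
        t = k / n * exposure_s
        x = int(round(vx * t)) % shape[1]
        y = int(round(vy * t)) % shape[0]
        psf[y, x] += 1.0 / n
    return psf

# A car moving 30 px/s horizontally, exposed for 0.1 s, smears across
# 3 pixels, each receiving 1/3 of the energy.
psf = motion_blur_psf((16, 16), (30.0, 0.0), 0.1)
```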
Thus, Lee discloses that the image is of a car “on the road” (Appeal Br. 25 (Claims App.)), and a person of ordinary skill in the art would understand that Lee discloses determining the motion of the car “along a roadway” (Lee ¶ 15). Moreover, Lee discloses that the amount of depicted spread is determined “by at least the speed of the car.” Id. Thus, we are persuaded that Lee discloses, “obtaining a velocity estimate of the vehicle’s speed and direction on the road,” i.e., a vector, as recited in claim 44. Appeal Br. 25 (Claim App.); Lee ¶ 15, Fig. 3 (step 315); see also Lee ¶ 2 (describing the importance of “license plate recognition”).

Referring to Lee’s Figure 3, Appellant contends:

It is clear that Lee’s object regions are rectangles in images and that his motion vectors are image space indicators of the movement of the rectangles. Lee’s motion vectors are not the speed and direction of a vehicle, particularly not the direction on a road (another physical object). The rejections must be withdrawn because Lee’s motion vectors are not estimates of a vehicle’s speed and direction.

Appeal Br. 15. Contrary to Appellant’s contention, however, Lee discloses, “[e]xamples of such an object are a license plate and a sign on a side of a vehicle such as a railway car.” Lee ¶ 17; see id. ¶ 2. As noted above, Lee discloses “the license plate 120 has the same motion as the car.” Id. ¶ 15. Thus, we are not persuaded the Examiner erred in finding Lee discloses, “obtaining a velocity estimate of the vehicle’s speed and direction on the road; and producing an enhanced image based at least in part on the image and the velocity estimate wherein the enhanced image shows at least a portion of the vehicle.” Final Act. 5–6 (emphases added; citing Lee, Figs. 1–5). Consequently, we are not persuaded the Examiner erred in rejecting claim 44 as anticipated by Lee, and we sustain the rejection of claims 44, 47, 50, 51, 53–55, 59, and 63.

2.
Dependent Claims 49 and 60

Claim 49 recites

extracting from the video data a plurality of additional images of the vehicle; and performing image registrations of the image of the vehicle and the additional images of the vehicle wherein the image registrations are based at least in part on the velocity estimate; wherein the enhanced image is produced by averaging the image of the vehicle and the additional images of the vehicle.

Appeal Br. 26 (Claims App.) (emphases added). Claim 60, reproduced above, recites corresponding elements. Id. at 30.

Figures 6A and 6B of the application are reproduced below. Figures 6A and 6B depict a super-resolution application. Spec. ¶ 12. The Specification explains:

Super-resolution algorithms typically combine multiple images with small amounts of relative motion in order to create a single high resolution image. Super-resolution is typically carried out in two steps, image registration and image reconstruction. Velocity-aware multi-frame super-resolution is another manner of enhancement wherein multiple low-resolution (typically temporally adjacent) frames of the vehicle are captured, and knowledge of the vehicle velocity aids registration of the frames on the higher resolution lattice. Knowledge of the vehicle velocity vector, along with the camera configuration and capture parameters, can lead to an estimate of the relative translation, in pixels, between the low-resolution images of the moving vehicle across adjacent frames, which can then be used to perform image registration prior to performing super-resolution. Consider the super-resolution application illustrated in FIGS. 6A-6B, where the temporal progression of 3x3 pixel neighborhood across three frames adjacent in time (namely frames n-1, n and n+1) is considered. In this particular case, a final image with twice the resolution as the individual frame is obtained.
With reference to FIG. 6A, displacement vector d1 describes the relative position between like image locations of frame n-1 (solid circles) and frame n (solid squares), while displacement vector d2 describes the relative position between like image locations of frame n (solid squares) and frame n+1 (solid triangles) computed from some known vehicle velocity according to the teachings herein. Note that some image features for which correspondences are found (solid circles and triangles) are not directly measured, and thus have to be inferred from features which are measured, that is, features that fall on the image lattice (empty circles and triangles). Also note that, in order for super-resolution to be effective, the estimated velocity of the vehicle must be such that displacement vectors have non-integer values relative to the lattice on which the video frames at least along the row or the column directions. . . .

With reference to FIG. 6B, registration of corresponding features (solid circles, squares, and triangles) results in slight displacement of measured features (empty circles and triangles) so that measured features fall outside the low-resolution lattice of the original video frames. Consequently, high-resolution image reconstruction can be achieved by inferring the values of image features that fall on a high-resolution lattice (empty squares) from the known image features (empty circles and triangles and filled squares). Such inference can be achieved via known interpolation, regression, fitting, model- or training-based methods, for example. In one embodiment and with reference to high-resolution image region 610, the value of unknown sample 611 can be inferred from a weighted combination of known values 612, 613, 614, and 615. Other embodiments could consider smaller or larger neighborhoods of known samples.
Once the high resolution image is estimated, additional regularization and filtering techniques that enforce reconstruction consistency and sharpness constraints can be employed.

Spec. ¶¶ 33, 34, 36. Thus, we understand claims 49 and 60 describe registering, i.e., overlaying, the first image and additional images, i.e., multiple frames, to obtain an enhanced image, i.e., an image with super-resolution, based on the velocity estimate, i.e., using displacement vectors. Velocity-aware denoising “relies on registration of adjacent frames and denoising via averaging and self-similarity methods.” Id. ¶ 37.

Lee’s Figures 4 and 5 are reproduced below. Figures 4 and 5 are graphical representations of two compressed video frames captured by a video camera, including a lattice or grid numbered on the vertical axis and alphabetized on the horizontal axis. Lee ¶ 7.

With respect to claim 49’s “extracting” and “performing” steps, the Examiner finds Lee’s Figures 4 and 5 or Figures 6 and 7 disclose “the two images registered on a lattice or grid, whereas in Fig. 3 it shows that if the motion vector is not significant, step 320, the scaling factor generation, de-blurring operation are skipped, meaning the registering of the images for de-blurring process is dependent on the velocity estimate.” Final Act. 7. The Examiner further finds Lee “discloses averaging the motion vectors of the object regions in the first image and the second image to determine the motion vector of the second object region.” Id. (citing Lee ¶ 18 (discussing image compression with respect to Figs. 4 and 5)).
In particular, Lee discloses: When an object region is determined to have occurred within the target image at step 305, the image capture process continues in some embodiments by proceeding to step 310, in which a determination is made of a second object region that includes the object-of-interest in a second image that has been captured by the camera. In some embodiments, the images are successive frames of a series of video frames taken at a periodic rate. In other embodiments, the pictures may not be periodic frames. . . . At step 315, a motion vector of the object-of-interest is determined using at least the first image. In some embodiments, both steps 305 and step 310 are performed and the motion vector is determined as being a displacement vector that indicates the displacement between the centroids of the first and second object regions, using techniques known to those of ordinary skill in the art. When the second frame is captured after the first frame, the displacement vector is typically determined as the displacement of the centroid of the second object region with reference to the centroid of the first object region. Lee ¶ 17. Thus, Lee’s de-blurring technique overlays multiple frames according to vector displacements, which are based on car motion, to achieve improved resolution of objects in the images. Id. ¶¶ 18–26. Appellant contends that the Examiner erred for two reasons. First, Appellant contends Lee does not use the terms “registration” or “lattice” or link those terms to the de-blurring operation of Lee’s Figure 3. Appeal Br. 16–17. Second, Appellant contends, “‘registrations’ cannot relate to any velocity estimate because, as described above, Lee never develops a velocity estimate but instead uses an unrelated motion vector.” Id. at 17. With regard to the first reason, as noted above, anticipation is not an ipsissimis verbis test. 
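The registration-then-averaging enhancement that claims 49 and 60 recite, and that the Specification describes in paragraphs 33 and 37, can be sketched as follows. This is a minimal illustration under stated assumptions (integer-pixel circular shifts, a constant per-frame displacement supplied by the velocity estimate, and hypothetical names), not the application's implementation.

```python
import numpy as np

def register_and_average(frames, velocity_px_per_frame):
    """Minimal sketch of velocity-aware registration and averaging.

    The velocity estimate gives the relative translation, in pixels,
    between adjacent frames; each frame is shifted back by its
    accumulated displacement, and the registered frames are then
    averaged to produce the enhanced (denoised) image.
    """
    vx, vy = velocity_px_per_frame
    registered = []
    for k, frame in enumerate(frames):
        dy, dx = int(round(k * vy)), int(round(k * vx))
        # undo frame k's displacement (circular shift for simplicity)
        registered.append(np.roll(frame, shift=(-dy, -dx), axis=(0, 1)))
    return np.mean(registered, axis=0)
```

With noiseless synthetic frames that translate exactly 2 px right and 1 px down per frame, the average of the registered frames reproduces the underlying image; with noisy frames, the same averaging attenuates the noise, which is the denoising behavior the Specification describes.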
The Examiner finds that Lee discloses registration and lattice consistent with the recitation in the claims and the disclosure in the Specification. Ans. 13–14 (“In this instance the plain meaning of registering is merely the image captured and acquired for processing, as described in Lee et al.”). Moreover, we are persuaded that “lattice” is adequately disclosed in Lee’s Figures 4, 5, 6, and 7. Id. at 13 (“Whereas, Lee et al. in Figs. 4-7 clearly shows the image registration (acquisition) on a lattice with dots and squares/rectangles.”). Lee need not use the precise words to disclose the claim recitations. As to the second reason, we addressed that teaching of Lee above with respect to the rejection of claim 44. Thus, Appellant does not persuade us that the Examiner erred in rejecting claim 49 or 60 as anticipated by Lee, and we sustain the rejection of those claims.

3. Dependent Claims 52 and 58

Claims 52 and 58 recite an algorithm in words that is disclosed in symbols in the Specification. In particular, Claim 52 recites

wherein the enhanced image is produced by frequency domain deconvolution expressed as a fraction having a numerator and a denominator, wherein the numerator comprises the product of the frequency domain representation of the image, image power spectral density, and the complex conjugate, and wherein the denominator comprises the sum of the noise power spectral density with the product of the squared magnitude of the complex conjugate and the image power spectral density.

Appeal Br. 27 (Claims App.) (emphasis added); see id. at 28–29 (claim 58).
The Specification explains:

In scenarios where, in addition to blur, there is additive noise, that is, f(m,n) = g(m,n) * h(m,n) + η(m,n), where η(m,n) denotes additive noise, the operation that leads to the minimization of the original unblurred image and the recovered unblurred image is given by the inverse frequency decomposition of

G(u,v) = H*(u,v)Sg(u,v)F(u,v) / [|H*(u,v)|²Sg(u,v) + Sη(u,v)]

where H*(u,v) denotes the complex conjugate of H*(u,v),[4] Sg(u,v) denotes the power spectral density of g(m,n) and Sη(u,v) denotes the power spectral density of η(m,n).

Spec. ¶ 32 (emphasis added). Lee discloses:

The de-blurring filter may be chosen from known types of deconvolution filters, of which two are given below: an inverse filter and a Weiner filter. Both are based on the above point spread function using a frequency transformed version of the point spread function, as is known to one of ordinary skill in the art.

Lee ¶ 36. In particular, Lee discloses the following algorithm for the Weiner filter: [equation not reproduced]. Id. ¶ 40.

The Examiner finds Lee’s application of a Weiner filter to a frequency domain representation of the image discloses this algorithm. Final Act. 9–10; Ans. 17–18. Thus, the Examiner concludes, Lee discloses the recited algorithm. In particular, the Examiner finds that the denominator of Appellant’s algorithm is identical to the denominator of Lee’s algorithm. Ans. 17. We disagree. Appellant’s algorithm includes the function |H*(u,v)|² in the denominator, but Lee’s algorithm includes the function |H(u,v)|² in the denominator. Ans. 17; Spec. ¶ 32; Lee ¶ 40. Thus, even accepting the Examiner’s assertion that Lee’s Weiner filter is applied to a frequency domain representation of the image, Lee does not disclose the algorithm recited in claims 52 and 58.

[4] This appears to be a typographical error. We understand that H*(u,v) denotes the complex conjugate of H(u,v). See Lee ¶ 42.
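The frequency-domain deconvolution the Specification gives in paragraph 32, G(u,v) = H*(u,v)Sg(u,v)F(u,v) / [|H*(u,v)|²Sg(u,v) + Sη(u,v)], can be sketched directly. This is an illustrative implementation under assumptions of my own (scalar power spectral densities, circular convolution via the FFT, hypothetical function names), not code from the application or from Lee.

```python
import numpy as np

def frequency_domain_deconvolve(f, h, s_g, s_eta):
    """Sketch of the deconvolution recited in claims 52 and 58:
    numerator   = F(u,v) * Sg(u,v) * conj(H(u,v))
    denominator = S_eta(u,v) + |conj(H(u,v))|**2 * Sg(u,v)
    where F is the frequency-domain image, H the frequency-domain point
    spread function, Sg the image PSD, and S_eta the noise PSD."""
    F = np.fft.fft2(f)      # frequency domain representation of the image
    H = np.fft.fft2(h)      # frequency domain point spread function
    H_conj = np.conj(H)     # the complex conjugate
    num = F * s_g * H_conj
    den = s_eta + np.abs(H_conj) ** 2 * s_g
    return np.real(np.fft.ifft2(num / den))

# Demo: blur a synthetic image with a 3-pixel horizontal motion-blur
# point spread function, then recover it.
rng = np.random.default_rng(0)
g = rng.random((16, 16))                  # "unblurred" image
h = np.zeros((16, 16))
h[0, :3] = 1 / 3                          # 3-px horizontal motion blur
f = np.real(np.fft.ifft2(np.fft.fft2(g) * np.fft.fft2(h)))
rec = frequency_domain_deconvolve(f, h, s_g=1.0, s_eta=1e-9)
```

With a small noise PSD the recovered image closely matches the original; as the noise PSD grows, the denominator term damps frequencies where the point spread function has little energy, which is the usual behavior of this family of filters.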
Consequently, we are not persuaded the Examiner demonstrates Lee anticipates claims 52 and 58, and we do not sustain the rejection of those claims. In summary, we are persuaded the Examiner erred in rejecting claims 52 and 58 under 35 U.S.C. § 102(a)(1) as anticipated by Lee, but did not err in rejecting claims 44, 47, 49–51, 53–55, 59, 60, and 63 under 35 U.S.C. § 102(a)(1) as anticipated by Lee. Consequently, we sustain the anticipation rejection of claims 44, 47, 49–51, 53–55, 59, 60, and 63, but do not sustain the anticipation rejection of claims 52 and 58.

D. Obviousness Over Lee and King, Scofield, Lim, or Wang

1. Dependent Claim 45

Claim 45 recites, in the method of claim 44, “the velocity estimate is obtained from a microwave detector.” Appeal Br. 25 (Claims App.). As noted above, the Examiner rejects claim 45 as obvious over the combined teachings of Lee and King. Final Act. 16–17. In particular, the Examiner finds that Lee does not teach or suggest obtaining a velocity estimate from a microwave detector. Id. at 17. Nevertheless, the Examiner finds that, in a vehicular collision warning system, King teaches detecting speed with a microwave detector. Id. (citing King, pgs. 4 (Table 2), 5 (Table 3)). Moreover, the Examiner finds “[a]t the time the invention was made, it would have been a matter of design choice to a person of ordinary skill in the art to use a microwave detector for velocity measurement because Applicant has not disclosed that a microwave detector is used for a particular purpose, or solves a stated problem.” Id. The Examiner does not suggest that a person of ordinary skill in the art would add a microwave detector as an additional means of estimating velocity. Id. at 16–17.
The Examiner suggests that a person of ordinary skill in the art would choose Lee’s image comparison method or King’s microwave detector to estimate velocity as “a matter of design choice.” Appellant contends the Examiner erred in rejecting this claim because Lee teaches using images to determine object motion, and substituting a microwave-detector-based determination of vehicle speed for Lee’s comparison of images would change a principle of Lee’s operation. Appeal Br. 22; see MPEP § 2143.01(VI) (“If the proposed modification or combination of the prior art would change the principle of operation of the prior art invention being modified, then the teachings of the references are not sufficient to render the claims prima facie obvious.”). The Examiner contends that the Specification indicates that the claimed method uses adjacent frames to estimate velocity. Ans. 20 (citing Spec. ¶ 33). The Examiner notes:

Thereafter, in claim 45 which is dependent upon claim 44, the Appellant is claiming to estimate the vehicle velocity with the help of a microwave detector. Which means the Appellant is switching from an image pixel based velocity vector estimation of the speed of the vehicle to a microwave detector based velocity estimation.

Id. We disagree. Unlike Lee, Appellant’s method of claim 44 only requires extracting a first image and does not specify how vehicular velocity is estimated. See id. at 25 (Claims App.). The Examiner attempts to read the embodiment disclosed in the Specification’s Paragraph 33 into the claims. However, the Specification makes clear that various means may be employed to estimate velocity. Specifically, the Specification states:

Detection may be accomplished via foreground and/or motion detection techniques such as background estimation and subtraction, temporal double differencing and optical flow.
Alternatively, non-video-based triggering devices already in the system such as an inductive loop detector embedded in a roadway, a magnetometer detector, a laser based detector or a microwave detector are capable of relaying vehicle location information, usually in real-world coordinates.

Spec. ¶ 22. We are persuaded that the Examiner erred in combining the teachings of King with those of Lee because the proposed combination is inapplicable to Lee’s image de-blurring system (see Lee, Abstract), and would change the principle of Lee’s operation. Reply Br. 7–8. Consequently, we do not sustain the obviousness rejection of claim 45.

2. Dependent Claim 46

Claim 46 recites, in the method of claim 44, “the velocity estimate is assumed according to historical speed averages.” Appeal Br. 25 (Claims App.). As noted above, the Examiner rejects claim 46 as obvious over the combined teachings of Lee and Scofield. Final Act. 17–18. In particular, the Examiner finds that Lee does not teach or suggest assuming an estimated velocity according to historical speed averages. Id. at 18. Nevertheless, the Examiner finds that, in a vehicular collision warning system, Scofield teaches assuming estimated velocity according to historical averages. Id. (citing Scofield ¶ 26). Specifically, the Examiner finds Scofield teaches, “a traffic condition estimation based on historical data, where it teaches using a historical speed average on a certain road for a vehicle’s speed estimation.” Id.
Moreover, the Examiner finds:

It would have been obvious for one of ordinary skill in the art at the time the invention was made to combine Lee et al’s invention of velocity estimate based image de-blurring to include Scofield et al’s usage of historical speed averaging, because the average reported speed may be estimated for some or all points on the road portion, and error estimates (or “error bars”) around an average reported speed for a point may further be generated (Scofield et al.; [0024], L8-16), which means it is possible to determine how much the estimated speed of the vehicle is off from actual speed.

Id. Appellant contends that the Examiner erred in making this rejection for essentially the same reasons stated with respect to the rejection of claim 45. Appeal Br. 21–22. We agree. We are persuaded that the Examiner erred in combining the teachings of Scofield with those of Lee because the proposed combination would change the principle of Lee’s operation. Reply Br. 7–8. Consequently, we do not sustain the obviousness rejection of claim 46.

3. Dependent Claims 48 and 62

Claim 48 recites, in the method of claim 44, the method further comprises the steps of:

extracting from the video data a second image of the vehicle; and registering the image and the second image on a lattice, wherein the lattice is higher resolution than the image and the second image; wherein the enhanced image is a super-resolution image constructed on the lattice from the video data.

Appeal Br. 25–30 (Claims App.). Claim 62 depends from claim 59 and recites corresponding limitations. Id. at 29–30. As noted above, the Examiner rejects claims 48 and 62 as obvious over the combined teachings of Lee and Lim. Final Act. 18–19. In particular, the Examiner finds that Lee “does not explicitly teach generating a super-resolution image from the constituent image frames.” Id. at 19.
Nevertheless, the Examiner finds that Lim teaches “a system where a super-resolution image is generated from plurality of lower resolution images as described in [0036]-[0041], where the images are registered on a lattice as shown in Figs. 1, 4.” Id. Moreover, the Examiner finds:

It would have been obvious for one of ordinary skill in the art at the time the invention was made to combine Lee et al’s invention of velocity estimate based image de-blurring to include Lim et al’s usage of generating super-resolution image from lower resolution images, because it has multiple advantages over the low resolution imaging system, for example, low cost, existing LR imaging system can be leveraged, and can be used in many applications (Lim et al.; [0008]).

Id. Appellant does not challenge the rejection of these claims separately from its challenge to the anticipation rejection of claims 49 and 60, discussed above. See supra Section C.2. For the reasons given above, we are not persuaded that the Examiner erred in finding that Lee and Lim teach or suggest all of the limitations of claims 48 and 62 and that a person of ordinary skill in the relevant art would have had reason to combine the teachings of these references in the manner proposed by the Examiner. Therefore, we sustain the obviousness rejection of claims 48 and 62.

4. Dependent Claims 56, 57, and 61

Claim 56 recites, in the computer-usable medium of claim 55, “computer executable instructions configured for receiving vehicle location information from a non-video-based triggering device wherein the vehicle location information is in real world coordinates.” Appeal Br. 28 (Claims App.). Claim 57 depends from claim 56 and recites that the medium further comprises “computer executable instructions configured for mapping the vehicle location information into pixel coordinates.” Id. Claim 61 recites limitations corresponding to those of claim 56. Id. at 30.
As noted above, the Examiner rejects claims 56, 57, and 61 as obvious over the combined teachings of Lee and Wang. Final Act. 21–23. In particular, the Examiner finds that Lee “does not explicitly teach gathering vehicle location information from a non-video-based triggering device and the location is in real world coordinate system.” Id. at 21. Nevertheless, the Examiner finds,

Wang teaches a system in the same field of endeavor, where it automatically derives moving vehicle from image frame, but derives vehicle from a reference markers on the road, which is non-video-based triggering mechanism (Wang; Abstract, L7-10), wherein the location information is in real world coordinates (Wang; [0058], L2-4; Fig. 4).

Id. (emphasis added). Moreover, the Examiner finds:

At the time the invention was made, it would have been a matter of design choice to a person of ordinary skill in the art to use a non-video-based triggering mechanism because Applicant has not disclosed that a non-video-based triggering mechanism is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Applicant’s invention to perform equally well with using a non-video-based triggering mechanism because of mere design choice.

Id. The Specification explains,

non-video-based triggering devices already in the system such as an inductive loop detector embedded in a roadway, a magnetometer detector, a laser based detector or a microwave detector are capable of relaying vehicle location information, usually in real-world coordinates. Camera calibration techniques described herein can then be used to map this location information into pixel coordinates.

Spec. ¶ 22. However, Wang states, “[t]he system automatically detects moving vehicles in each image frame and derives vehicle positions from a projective mapping established from reference markers on the road.” Wang, Abstract (emphasis added).
Referring to Wang’s Figure 1, Wang explains, “for each image the software engine retrieves from the tracking camera, it is possible to reconstruct the parallelogram region defined by the four reference markers 104 on the road 102.” Id. ¶ 58 (emphases added); see id. ¶ 44 (“In FIG. 2, the four cones 114 represent the four reference points (equivalent to markers 104 in FIG. 1) that define a rectangular region along the road 102. As discussed above, the width and length of the road segment defined by the cones 114 are measured and entered into the software engine for calibration of the tracking camera.”). Thus, the reference markers are imaged by the camera and used to convert the location of the vehicle from image coordinates to real-world coordinates. See id., Fig. 4. Therefore, the reference markers are video-based triggering devices, not non-video-based triggering devices, as recited in claims 56, 57, and 61. Appeal Br. 22–23; see Reply Br. 8. In view of the Specification’s description of “non-video-based triggering devices,” we are persuaded that the Examiner erred in finding the combined teachings of Lee and Wang teach or suggest the “non-video-based triggering device,” as recited in claims 56, 57, and 61. Consequently, we do not sustain the obviousness rejection of claims 56, 57, and 61. In summary, we are persuaded the Examiner erred in rejecting claims 45, 46, 56, 57, and 61 under 35 U.S.C. § 103 as rendered obvious by the teachings of Lee in combination with one of King, Scofield, or Wang, but did not err in rejecting claims 48 and 62 under 35 U.S.C. § 103 as rendered obvious by the combined teachings of Lee and Lim. Consequently, we sustain the obviousness rejection of claims 48 and 62, but do not sustain the obviousness rejections of claims 45, 46, 56, 57, and 61.

DECISIONS

1. The Examiner erred in rejecting claim 60 under 35 U.S.C. § 112(a) as lacking written description.

2.
The Examiner did not err in rejecting claim 59 under 35 U.S.C. § 112(b) as indefinite.

3. The Examiner erred in rejecting claims 52 and 58 under 35 U.S.C. § 102(a)(1) as anticipated by Lee, but did not err in rejecting claims 44, 47, 49–51, 53–55, 59, 60, and 63 under 35 U.S.C. § 102(a)(1) as anticipated by Lee.

4. The Examiner erred in rejecting claim 45 under 35 U.S.C. § 103 as obvious over the combined teachings of Lee and King.

5. The Examiner erred in rejecting claim 46 under 35 U.S.C. § 103 as obvious over the combined teachings of Lee and Scofield.

6. The Examiner did not err in rejecting claims 48 and 62 under 35 U.S.C. § 103 as obvious over the combined teachings of Lee and Lim.

7. The Examiner erred in rejecting claims 56, 57, and 61 under 35 U.S.C. § 103 as obvious over the combined teachings of Lee and Wang.

8. Thus, on this record, claims 45, 46, 52, 56–58, and 61 are not unpatentable.

CONCLUSION

Because we affirm at least one rejection of each of claims 44, 47–51, 53–55, 59, 60, 62, and 63, we affirm the Examiner’s decision rejecting those claims. For the reasons given above, on this record, we reverse the Examiner’s decision rejecting claims 45, 46, 52, 56–58, and 61.

In summary:

Claims Rejected          | 35 U.S.C. § | Reference(s)/Basis  | Affirmed                         | Reversed
60                       | 112(a)      | Written Description |                                  | 60
59                       | 112(b)      | Indefiniteness      | 59                               |
44, 47, 49–55, 58–60, 63 | 102(a)(1)   | Lee                 | 44, 47, 49–51, 53–55, 59, 60, 63 | 52, 58
45                       | 103         | Lee, King           |                                  | 45
46                       | 103         | Lee, Scofield       |                                  | 46
48, 62                   | 103         | Lee, Lim            | 48, 62                           |
56, 57, 61               | 103         | Lee, Wang           |                                  | 56, 57, 61
Overall Outcome          |             |                     | 44, 47–51, 53–55, 59, 60, 62, 63 | 45, 46, 52, 56–58, 61

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED-IN-PART