Ex Parte Pfister, Patent Trial and Appeal Board, Mar. 7, 2016, Application 12/840,649 (P.T.A.B. Mar. 7, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 12/840,649
FILING DATE: 07/21/2010
FIRST NAMED INVENTOR: Marcus Pfister
ATTORNEY DOCKET NO.: P10,0153 (26965-4518)
CONFIRMATION NO.: 1507

26574 7590 03/09/2016
SCHIFF HARDIN, LLP
PATENT DEPARTMENT
233 S. Wacker Drive, Suite 6600
CHICAGO, IL 60606-6473

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

EXAMINER: IMPINK, BRADLEY GERALD
ART UNIT: 3768
NOTIFICATION DATE: 03/09/2016
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): patents-CH@schiffhardin.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte MARCUS PFISTER1

Appeal 2014-001608
Application 12/840,649
Technology Center 3700

Before DONALD E. ADAMS, JEFFREY N. FREDMAN, and JACQUELINE T. HARLOW, Administrative Patent Judges.

PER CURIAM

DECISION ON APPEAL

This is an appeal under 35 U.S.C. § 134(a) involving claims to a method for visualizing aortal stent placement. The claims are rejected as obvious. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

1 According to Appellant, the Real Party in Interest is Siemens Aktiengesellschaft (App. Br. 1).

STATEMENT OF THE CASE

The Specification describes a method for visualizing stent placement that purports to reduce contrast use relative to prior methods (3:21-4:12). Claims 1 and 3-7 are on appeal. Claim 1 is illustrative and reads as follows (emphasis added):

1.
A method for visualizing placement of a stent in an aorta of a patient with reduced use of contrast agent, comprising the steps of:
providing a 3D volume image of the aorta of the patient from a CT scan of the patient before placing the stent;
providing an angiography system with a C-arm to take 2D images of the patient;
providing a computer having registration software for registering the 3D volume image and 2D images taken by the angiography system;
performing a first segmentation on the 3D volume image to segment the aorta from remaining parts of the 3D image;
performing a second segmentation on the 3D volume image to segment a bony structure of the patient from remaining parts of the 3D volume image;
using the angiography system obtaining a first 2D image of the aorta from a first direction with use of a contrast agent;
using the angiography system obtaining a second 2D image of said bony structure from a second direction but without use of contrast agent;
registering the segmented aorta in the 3D volume image to the C-arm of the angiography system to create a registered 3D volume image by registering the first 2D image to the segmented aorta and registering the second 2D image to the segmented bony structure; and
placing the stent in the aorta while observing on said angiography system a continuous plurality of additional 2D images taken when placing the stent by said angiography system and which are superimposed on the registered 3D volume image.

Claims 1 and 3-7 stand rejected under 35 U.S.C. § 103(a) as being obvious based on the combination of Shina2 and Bertrams3 alone, or alternatively, in further combination with Srinivasan.4

Findings of Fact

FF 1.
Shina discloses

A method of estimating a position of a foreign object in the body, comprising: (a) providing a 3D data set of at least one blood vessel; (b) acquiring at least one 2D projection image of the vessel including the object; (c) registering the 2D projection image of the vessel to the 3D data set; and (d) using the registration to estimate a 3D position of said object restricted to be in a blood vessel, according to said 3D data set.

(Shina Abstract; see also Ans. 3.)

FF 2. Shina discloses that

a 3D data set, for example a multi-slice CT data set (MSCT), is registered to a single or multi-plane projection image acquired during catheterization and used to generate a 3D position of a catheter or other foreign object (optionally movable) that is inside a body. Alternatively or additionally, data from the 3D data set and from the image are combined, for example, to generate an enhanced presentation or to correct for errors in data.

(Shina ¶ 8; see also Ans. 3.)

FF 3. Shina discloses that

the projection image(s) is an angiography image ... registered to a CT data set, for example a data set of the heart. [A] vascular tree of the heart is extracted from the CT data set and optionally used to find a correct transformation that projects the 3D data set onto the projection image. Optionally, the projection image and the data set are registered based on a correspondence of the vascular tree and the angiographic image.

(Shina ¶ 10; see also Ans. 3.)

2 Shina, US 2006/0036167 A1, issued Feb. 16, 2006.
3 Bertrams et al., WO 2008/065581 A2, published June 5, 2008.
4 Srinivasan et al., Segmentation Techniques for Target Recognition, 1 INTERNATIONAL JOURNAL OF COMPUTERS AND COMMUNICATIONS 3:75-81 (2007).

FF 4.
Shina discloses a stent delivery catheter that may be applied to the abdominal aorta using an apparatus that includes a 3D imager and a C-arm imager, and that "a computer is programmed and/or circuitry built to carry out the method" along with "registration software for registering CT data to projection data" (see Shina ¶¶ 12, 73, 82, 83, 179, 183, 186; see also Ans. 3-4).

FF 5. Shina discloses that

the better structural definition in a 3D data set (such as CT) is used to complement better spatial resolution but otherwise limited data sets, for example, as in IVUS (where much data may be missing and/or be at an unknown position) and/or angiographic images (where the projection hides structural information). Optionally, it is the segmentation of the 3D data set which is used to complement (or be enhanced by) the other imaging modality.

(Shina ¶ 17.)

FF 6. Shina discloses

(a) providing a 3D data set of at least one blood vessel; [] (b) acquiring at least one 2D projection image of the vessel including the object; [] (c) registering the 2D projection image of the vessel to the 3D data set; and [] (d) using the registration to estimate a 3D position of said object restricted to be in a blood vessel, according to said 3D data set.

(Id. at ¶¶ 20-23; see also Ans. 4.)

FF 7. Shina discloses "enhancing said 3D data set with said 2D projection image," in which "enhancing comprises enhancing multiple projection images" and "enhancing comprises adding angiography data from said projection image to said 3D data set" (Shina ¶ 32; see also Ans. 3).

FF 8. Shina discloses that "the 3D image data set is analyzed to identify blood vessels[,]" and that "the blood vessels are extracted from the data as a vascular tree" (Shina ¶ 91; see also Ans. 4, 7).
Shina further discloses that the "tree extraction includes generating a model for other parts of the body, such as a heart," that "the image of the heart is used to assist a physician in orienting himself in the body," and that "additional extraction of data is performed" to prevent disorientation (Shina ¶ 96; see also Ans. 4, 7).

FF 9. Shina discloses that "[i]n some cases, however, it is desired to further minimize exposure to radiation and/or contrast material" and that "the catheter/tip position is used also for advancing of a catheter, optionally without contrast material, for example, using the vascular tree and [] low-level radiation imaging" (Shina ¶¶ 126, 177; see also Ans. 3, 6).

FF 10. Bertrams5 discloses

an apparatus for determining a position of a first object [] within a second object []. The apparatus [] comprises a provision unit [] for providing a three-dimensional model [] of the second object []. A projection unit [] generates a two-dimensional projection image [] of the first object [] and of the second object [], and a registration unit [] registers the three-dimensional model [] with the two-dimensional projection image [].

(Bertrams Abstract; see also Ans. 4-5.)

5 For convenience, the Examiner refers to the US equivalent of Bertrams, US 2010/0020161 A1, published Jan. 28, 2010 (Ans. 2-3). For purposes of this Appeal, we likewise cite to the US equivalent.

FF 11. Bertrams discloses that "the second object [] is segmented within the three-dimensional image, and the segmented second object [] is visualized by, for example, surface rendering yielding a three-dimensional model of the second object" and that

2D-3D registration can, for example, be performed by using two two-dimensional projection images, which have been generated in different projection directions.
The three-dimensional model of the second object [] can be sized, rotated and translated by means of known graphical techniques such that the three-dimensional model is aligned with one of the two-dimensional projection images by using "fiducial points" (image features that indicate certain anatomical structures or that are visible in both the two-dimensional projection image and the three-dimensional model) or by using an X-ray contrast medium injection to outline the shape of [] the heart chamber under consideration [], or by using [] the spine or ribs that can easily be seen in both, the two-dimensional projection image and the three-dimensional model of the second object []. . . . [T]he projection unit is rotated preferably around 90°, and the three-dimensional model is rotated accordingly. The second of the two two-dimensional projection images corresponds to this rotated projection direction, i.e. the second two-dimensional projection image has been generated in this rotated projection direction, and the three-dimensional model is also aligned with respect to this second two-dimensional projection image.

(Bertrams ¶¶ 38, 40; see also Ans. 5.)

FF 12. Srinivasan discloses "methodologies and algorithms for segmenting 2D images as a means in detecting target objects embedded in visual images for Automatic Target Detection/Recognition applications" (Srinivasan Abstract; see also Ans. 7-8).

FF 13. Srinivasan discloses that "Automatic Target Detection/Recognition (ATDR) is an application of pattern recognition for image processing that detects and identifies types of target objects" and that "ATDR is a major objective in processing digital images for detecting, classifying, and tracking target objects embedded in an image" (Srinivasan 75, 1st col.; see also Ans. 7-8).

FF 14.
Srinivasan discloses that

the separation of an image into object[s] and background is usually done by simplifying and/or changing the representation of an image by enhancing the visual representation of boundaries (lines, curves, etc.). This makes the object differentiation, isolation and detection task easier. The process is known as image segmentation. Image segmentation is one of the primary steps in image analysis for object [d]etection, recognition and identification[.]

(Srinivasan 75, 1st col.; see also Ans. 7-8.)

FF 15. Srinivasan discloses that

segmentation refers to the process of partitioning a digital image into multiple regions (sets of pixels). The result of image segmentation is a set of regions that collectively cover the entire image, or a set of contours extracted from the image. Each pixel within a region is uniquely similar with respect to some characteristic or computed property, such as color, intensity, or texture. Pixels from adjacent regions are significantly different with respect to the same characteristic(s).

(Srinivasan 75, cols. 1-2; see also Ans. 7-8.)

Analysis

Claims 1 and 3-5:

We adopt the Examiner's findings of fact and reasoning regarding the scope and content of the prior art (Ans. 2-8; FF 1-15), and agree that the claims are obvious over Shina, Bertrams, and Srinivasan. We address Appellant's arguments below.

Appellant contends that "Shina only has one 2D image and only one segmentation of a 3D volume image to segment an aorta and does not have a second segmentation to segment a bony structure" (App. Br. 6). Appellant argues that the additional 3D volume segmentations disclosed by Shina do not suggest segmentation of an aorta and a bony structure (id. at 7-8; Reply Br. 3). Similarly, Appellant contends that Bertrams teaches segmentation of only one object within the 3D image (App. Br. 6).
Appellant acknowledges Bertrams' teaching that the spine can be used as a fiducial point in the 2D and 3D images, but argues that Bertrams fails to disclose segmentation of the spine from the same 3D image as the aorta (id. at 7). Appellant likewise contends that Bertrams fails to disclose segmentation of the catheter tip in the 3D image (Reply Br. 1-2).

We do not agree. Shina teaches "acquiring at least one 2D projection image" (FF 1, 6 (emphasis added)), and "enhancing said 3D data set with said 2D projection image," in which "enhancing comprises enhancing multiple projection images" (FF 7 (emphasis added)). Shina also teaches that "the blood vessels are extracted from the [3D image] data as a vascular tree," that the "tree extraction includes generating a model for other parts of the body," and that "additional extraction of data is performed" to prevent disorientation (FF 8 (emphasis added)). Srinivasan discloses that segmentation of images for target recognition is well known in the field (FF 12-15).

Furthermore, Appellant's arguments do not account for Bertrams' contribution to the combination. Bertrams teaches that "the second object [] is segmented within the three-dimensional image," "using two two-dimensional projection images," and that

the three-dimensional model is aligned with one of the two-dimensional projection images by using "fiducial points" (image features that indicate certain anatomical structures or that are visible in both the two-dimensional projection image and the three-dimensional model) or by using an X-ray contrast medium injection ... or by using [] the spine or ribs that can easily be seen in both, the two-dimensional projection image and the three-dimensional model ...

(FF 11 (emphasis added).) See In re Keller, 642 F.2d 413, 425 (CCPA 1981).
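As a purely illustrative aside, the kind of two-pass segmentation of a single 3D data set discussed in the findings above can be sketched as intensity-range masking, one simple segmentation technique consistent with Srinivasan's definition (partitioning by a shared pixel characteristic such as intensity). The synthetic volume, thresholds, and function name below are hypothetical and appear nowhere in the record:

```python
import numpy as np

def segment_range(volume, lo, hi):
    """Return a boolean mask of voxels whose intensity lies in [lo, hi).

    One segmentation pass isolates one structure; a second pass with a
    different intensity range isolates a second structure from the same
    volume, per Srinivasan's region-partitioning definition.
    """
    return (volume >= lo) & (volume < hi)

# Toy 3D volume: background at 0, a "vessel" slab at intensity 300
# (e.g., contrast-filled aorta), a "bone" slab at intensity 900.
volume = np.zeros((4, 4, 4))
volume[1, :, :] = 300.0   # hypothetical aorta region
volume[3, :, :] = 900.0   # hypothetical spine region

vessel_mask = segment_range(volume, 200, 500)   # first segmentation
bone_mask = segment_range(volume, 700, 1200)    # second segmentation

# The two passes isolate disjoint structures within the same data set.
assert not np.any(vessel_mask & bone_mask)
```

Whether run as one labeling pass or two separate threshold passes, the result is the same set of identified structures, which is the point the Examiner's reasoning below turns on.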
We agree with the Examiner that

In order to accurately perform the registration process[,] one of ordinary skill would have recognized the need to first locate in the 3D data set the different structures which are to be registered with the 2D image data such as the aorta as in Shina and the spine and ribs as in Bertrams. One of ordinary skill in the art would have recognized that this could be done in either one segmentation step or in two separate segmentation steps with the end result being the same in that the different structures are identified and ready for subsequent processing. Furthermore, Shina even suggests extraction (i.e. segmenting the data of interest from the remaining portions of the image not useful/needed for the task at hand) of the vessel tree and other structures such as the heart which may serve to assist the physician in orienting himself when viewing the images in order to avoid disorientation. These additional segmentations could obviously have included the spine and ribs (as taught by Bertrams) which one of ordinary skill would have recognized as useful reference structures for providing orientation. Finally, in light of Srinivasan, it is well known to use segmentation for partitioning an image into multiple regions where each region shares some common characteristics which differ from other regions in order to differentiate and isolate different structures. In order to accurately perform a 2D-3D registration it would have been obvious to first segment the different structures (such as the spine and aorta) which again could obviously be done in a single step or multiple steps without materially affecting the process.

(Ans. 10.) See In re Merck & Co., 800 F.2d 1091, 1097 (Fed. Cir. 1986) ("Non-obviousness cannot be established by attacking references individually where the rejection is based upon the teachings of a combination of references [].
[The reference] must be read, not in isolation, but for what it fairly teaches in combination with the prior art as a whole.").

Appellant also contends that Srinivasan does not teach that "the first segmentation is an aorta and the second segmentation is a bony structure" (App. Br. 7). Appellant argues that while Srinivasan teaches isolation of different objects, Srinivasan does not teach "a second segmentation of a 3D volume image to segment a bony structure of the patient and obtaining a second 2D image of said bony structure from a second direction without use of contrast agent" (Reply Br. 3). We do not find this argument persuasive as it fails to account for Bertrams' and Shina's contributions to the combination as discussed above. See In re Keller, 642 F.2d 413, 425 (CCPA 1981).

Appellant further argues that if neither Shina nor Bertrams teaches two segmentations in the same 3D volume image, then it is not possible that the combination of Shina and Bertrams could teach two segmentations (App. Br. 7). Appellant contends that the combination of Bertrams and Shina would "only suggest providing a 2D image of a segmented aorta and a second non-segmented 2D image of a catheter tip 14 or a heart 13" but would not suggest "a second segmentation of a 3D volume image of a bony structure and no second 2D image of the bony structure without use of contrast agent" (Reply Br. 2). Appellant argues that because the Examiner does not indicate where it is in the reference that Bertrams teaches registration of bony structures such as the spine or ribs, the Examiner erred in concluding that it would have been obvious to take a second 2D image of a bony structure without a contrast agent (id.).
Appellant contends that while Shina teaches, at paragraph 126, minimizing exposure to contrast material, this is a broad non-specific teaching that would not lead one skilled in the art to take a second 2D image of a bony structure without contrast agent (id. at 2-3).

We are not persuaded. We first note that Bertrams teaches, at paragraph 38, that "the second object [] is segmented within the three-dimensional image," and at paragraph 40, that "[t]he 2D-3D registration can, for example, be performed ... using [] the spine or ribs that can easily be seen in both, the two-dimensional projection image and the three-dimensional model" (FF 11). We also note that Shina teaches "[i]n some cases, [] it is desired to further minimize exposure to radiation and/or contrast material[,]" and describes an embodiment where the position of a catheter is visualized "without contrast material" when using low-level radiation imaging (FF 9).

Appellant has not provided any evidentiary basis on this record to support a conclusion that "it is not possible that the combination of Shina and Bertrams could teach two segmentations" (App. Br. 7) or that Shina "is a broad non-specific teaching that would not lead one skilled in the art to take second 2D image of a bony structure without contrast agent" (Reply Br. 2-3), especially in light of the teachings of Srinivasan concerning segmentation (FF 12-15). See In re Geisler, 116 F.3d 1465, 1470 (Fed. Cir. 1997) ("[A]ttorney argument [is] not the kind of factual evidence that is required to rebut a prima facie case of obviousness").

Claim 6:

Appellant contends that "[i]ndependent claim 6 distinguishes at least for the reasons noted with respect to claim 1 and also by reciting that it is the abdominal aorta which is being segmented and the bony structure is the spine" (App. Br. 9; see also Reply Br. 4).
As explained above, in our discussion of the bony structure, Bertrams discloses that the spine may be segmented and used for 2D-3D registration (FF 11; Ans. 10). In addition, we note that Shina explains that the disclosed imaging methods may be applied to the abdominal aorta (FF 4). Likewise, Appellant's argument that the cited art fails to disclose segmentation of both structures in combination (App. Br. 9) is unpersuasive for the reasons set forth above. Therefore, we affirm the rejection of claim 6.

Claim 7:

Appellant contends that "[c]laim 7 is similar to claim 1 but is broader since it does not recite the two segmentations, but does recite obtaining the second 2D image of a bony structure without use of contrast agent, a feature not taught by the references as discussed above" (Reply Br. 4; see also App. Br. 9). We have already addressed above the limitations concerning the bony structure and the non-use of a contrast agent. Therefore, we also affirm the rejection of claim 7.

SUMMARY

We affirm the rejection of claims 1, 6, and 7 under 35 U.S.C. § 103(a) based on Shina and Bertrams alone and, alternatively, Srinivasan. Claims 3-5 fall with claim 1.

TIME PERIOD FOR RESPONSE

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED