KONINKLIJKE PHILIPS N.V., Appeal 2020-001464 (P.T.A.B. Dec. 8, 2020)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________

Ex parte RANJITH NAVEEN TELLIS, THUSITHA DANANJAYA DE SILVA MAGOTUWANA, and YUECHEN QIAN
____________

Appeal 2020-001464¹
Application 15/035,929
Technology Center 3600
____________

Before HUBERT C. LORIN, NINA L. MEDLOCK, and MATTHEW S. MEYERS, Administrative Patent Judges.

MEYERS, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Appellant² appeals under 35 U.S.C. § 134(a) from the Examiner's final rejection of claims 1–14 and 20, which constitute all the claims pending in this application. We have jurisdiction under 35 U.S.C. § 6(b).

We AFFIRM-IN-PART.

¹ We use the word "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42. Our decision references Appellant's Appeal Brief ("Appeal Br.," filed September 11, 2019) and Reply Brief ("Reply Br.," filed December 16, 2019), and the Examiner's Answer ("Ans.," mailed October 21, 2019) and Final Office Action ("Final Act.," mailed April 29, 2019).

² Appellant identifies "KONINKLIJKE PHILIPS N.V." as the real party in interest. Appeal Br. 2.

CLAIMED INVENTION

Appellant's claimed invention relates to "[a] method for automatically setting image viewing context." Spec. ¶ 3. Claims 1, 11, and 20 are the independent claims on appeal. Claim 1, reproduced below with bracketed matter and emphasis added, is illustrative of the claimed subject matter:

1. A method for automatically setting image viewing context, comprising:
[(a)] extracting image references and body parts associated with the image references from a report;
[(b)] mapping each of the body parts to an image viewing context so that image references associated are also associated with the image viewing context, said image viewing context including display settings for displaying an image;
[(c)] receiving a user selection indicating an image to be viewed;
[(d)] determining whether the user selection is one of the image references associated with the image viewing context based on the extracted image references and body parts associated with the image references from the report;
[(e)] displaying the image of the user selection;
[(f)] receiving a user input updating the image; and
[(g)] displaying the updated image based on a specific user profile.
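As context for the limitations discussed below: the Specification, as quoted in the written description analysis later in this decision, describes the claimed mapping as a look-up table from body parts/anatomy to window width/level settings (Spec. ¶¶ 8, 10). The following is a minimal, hypothetical sketch of steps (a) through (e) under that reading. The body parts, window values, report wording, and function names are illustrative assumptions only; they are not taken from the application or from Moriya.

# Hypothetical sketch (not from the application): map body parts extracted
# from a report to viewing-context settings, then apply them on selection.

# Illustrative stand-in for a look-up table mapping body parts to window width/level.
VIEWING_CONTEXTS = {
    "lung":  {"window_width": 1500, "window_level": -600},
    "liver": {"window_width": 150,  "window_level": 30},
    "bone":  {"window_width": 2000, "window_level": 300},
}

def extract_references(report_sentences):
    """Very rough stand-in for step (a): pair image references with body parts."""
    refs = {}
    for sentence in report_sentences:
        for body_part in VIEWING_CONTEXTS:
            if body_part in sentence.lower() and "image" in sentence.lower():
                # e.g. "Image 3 shows a mass in the liver." -> {"image 3": "liver"}
                image_ref = sentence.lower().split("shows")[0].strip()
                refs[image_ref] = body_part
    return refs

def display_settings_for(selection, refs):
    """Steps (b)-(d): return the viewing context if the selection is a known reference."""
    body_part = refs.get(selection)
    if body_part is None:
        return None  # selection is not one of the extracted image references
    return VIEWING_CONTEXTS[body_part]

if __name__ == "__main__":
    report = ["Image 3 shows a mass in the liver.", "Image 7 shows a nodule in the lung."]
    refs = extract_references(report)
    print(display_settings_for("image 3", refs))  # {'window_width': 150, 'window_level': 30}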
REJECTIONS

Claims 1–14 and 20 are rejected under 35 U.S.C. § 112(a) as failing to comply with the written description requirement (Final Act. 2–4).

Claims 1–3, 7–13, and 20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Moriya (US 2012/0183188 A1, pub. July 19, 2012) (id. at 4–7).

Claims 4–6 and 14 are rejected under 35 U.S.C. § 103(a) as being obvious over Moriya (id. at 7–8).

ANALYSIS

Written Description

The Examiner rejects independent claims 1, 11, and 20 as failing to comply with the written description requirement because "[t]he [S]pecification does not disclose how the system knows that a selected image is 'associated with the image viewing context based on the extracted image references and body parts associated with the image references from the report'" (Final Act. 3). According to the Examiner, "[t]he claimed 'based on' algorithm is not disclosed" (id.). The Examiner rejects dependent claims 2–10 and 12–14 for the same reasons (id. at 4).

Appellant argues that the claim language is described in the Specification at least in paragraphs 3, 8, and 10–14 (Appeal Br. 5). Responding to Appellant's arguments in the Answer, the Examiner explains that the issue:

is whether the disclosure describes how the system determines the relationship between "the user selection is one of the image references associated with the image viewing context" given some "extracted image references and body parts" that are somehow "associated with the image references from the report." The Examiner notes the breadth of the claim language and the lack of specificity within the disclosure.

Ans. 3.

Whether a specification complies with the written description requirement of 35 U.S.C. § 112, first paragraph (now 35 U.S.C. § 112(a)), is a question of fact and is assessed on a case-by-case basis. See, e.g., Purdue Pharma L.P. v. Faulding, Inc., 230 F.3d 1320, 1323 (Fed. Cir. 2000) (citing Vas-Cath Inc. v. Mahurkar, 935 F.2d 1555, 1561 (Fed. Cir. 1991)). The disclosure, as originally filed, need not literally describe the claimed subject matter (i.e., using the same terms or in haec verba) in order to satisfy the written description requirement. But the specification must convey with reasonable clarity to those skilled in the art that, as of the filing date, Appellant was in possession of the claimed invention. See id.

Here, the Specification discloses an algorithm in prosaic terms that a person of ordinary skill in the art would understand indicates the inventor had possession of the claimed subject matter. For example, paragraph 3 of the Specification discloses that the method includes:

extracting image references and body parts associated with the image references from a report, mapping each of the body parts to an image viewing context so that image references associated [with body parts] are also associated with the image viewing context, receiving a user selection indicating an image to be viewed, determining whether the user selection is one of the image references associated with the image viewing context and displaying the image of the user selection.

As an example, the Specification indicates that "image viewing context" may correspond to "a window width/level in which the associated image is to be viewed" (Spec. ¶ 8). The Specification further discloses an exemplary method of storing the associations between (1) body parts, (2) image references, and (3) image viewing context (id. ¶ 8).
More specifically, paragraph 10 discloses that "[b]ased on the extracted image information, the processor 102 can automatically select a window width/level in which a finding (e.g., mass) in the image should be viewed" using "look-up table 114 stored in the memory 108 which maps body parts/anatomy to window width/level settings."

In light of the above disclosure, we agree with Appellant (Appeal Br. 5) that the Specification provides adequate written description support for claims 1, 11, and 20. Accordingly, the rejection of claims 1, 11, and 20 and dependent claims 2–10 and 12–14 is not sustained.

Anticipation

Independent Claims 1, 11, and 20 and Dependent Claims 2, 7–10, 12, and 13

Appellant argues claims 1, 2, 7–13, and 20 as a group (see Appeal Br. 6–9; Reply Br. 4–7). We select independent claim 1 as representative. Claims 2, 7–13, and 20 stand or fall with independent claim 1. See 37 C.F.R. § 41.37(c)(1)(iv).

We are not persuaded by Appellant's argument that the Examiner erred in rejecting independent claim 1 under 35 U.S.C. § 102(a)(1) because Moriya fails to disclose the subject matter of claim 1 (see Appeal Br. 6–9; Reply Br. 4–7). Instead, we agree with, and adopt, the Examiner's findings and rationales as our own (see Final Act. 4–7; Ans. 7–8). We add the following discussion for emphasis.

In rejecting claim 1, the Examiner finds that Moriya discloses all of the limitations of claim 1 (Final Act. 4–5). In particular, the Examiner finds that Moriya discloses user input of lesion positions via image finding field 105, that lesions are linked to an image viewing context by hyperlinks, selection of an image via image selection button 119b, extraction of the organ to which the inputted position of the lesion area belongs, and display of the selected image (id. (citing Figures 5, 7–9, and ¶¶ 89–91, 95, 97, 99, 101, 104, 110–120, 126, 154, 163, 164 of Moriya)).

Focusing on limitation (d) of claim 1, Appellant argues that paragraphs 101, 104, 163, and 164, on which the Examiner relies, "merely disclose extracting the organs belonging to the lesion area received from the user input using a specific extraction method" (Appeal Br. 7 (citing Moriya ¶ 101)). According to Appellant, Moriya's "system displays a medical image based on the user input using a display condition determination means wherein the image is displayed based on the template that is associated with the organ from the user input" (id. (citing Moriya ¶¶ 163, 164)). Appellant argues that "[t]here is no disclosure of Moriya that focuses on determining an image viewing context for a selected image after image references and their associated body parts have been extracted from a report" and "these paragraphs of Moriya are concerned with extracting organs that belong to the position of the lesion area that is entered into the system" (id.).

We are not persuaded of error in the Examiner's rejection. As best understood, Appellant does not argue that the organs and lesions extracted by Moriya's system are not "body parts" as recited in claim 1. Nor does Appellant make any attempt to distinguish the claimed "image viewing context" from Moriya's display conditions. Indeed, Appellant acknowledges (Appeal Br. 7) that Moriya's system displays images based on a template associated with the extracted organ.
Appellant does not point to any limitation of claim 1 that excludes "extracting organs that belong to the position of the lesion area that is entered into the system" (id.).

We are similarly unpersuaded by Appellant's argument that "[t]here is simply no discussion within Moriya that is related to determining an image viewing context that is related to a user selection" (Appeal Br. 8). With reference to Figure 5, Moriya discloses that when a "thumbnail 121A is selected from a plurality of thumbnails representing medical images V in reference image area 117," "medical image 115 corresponding to thumbnail 121A is displayed in detailed image display area 113" (Moriya ¶ 93). Moriya's system then allows a user to input the position of a link input lesion, using a crosshair pointer to input the position of a lesion using a mouse, for example (id. ¶ 95). Lesion position is stored in connection with a lesion character 105A received via finding field 105 (id. ¶ 97). Moriya discloses that the system additionally stores "information to identify a medical image that includes a lesion position," "(4) an anatomical region name of a lesion position such as left upper lung lobe, left lingular segment, or the like," "(5) coordinate values of a lesion position," or "a combination thereof" (id. ¶ 98).

Moriya discloses that the organ to which the inputted lesion position belongs is extracted by computer aided diagnosis (CAD), including specific CAD techniques for extracting lungs, livers, bones, and hearts (id. ¶ 99). Moriya further discloses that "any other organ recognition technique may be used as long as it is capable of automatically extracting the organ to which the specified lesion position belongs" (id.). More particularly, Moriya discloses that "an association table of position of lesion area and lesion character candidate associating 203 organ names, characteristic information of positions of lesion areas, and lesion character candidates with each other is provided" (id. ¶ 100), as shown in Figure 13 of Moriya.

Lesion characters are stored in lesion storage means 20 (id. ¶ 103), and "[a] medical image that includes the position 115A of lesion area obtained from input section 303 and the lesion character 105A linked by a hyperlink are inserted in finding field 105" (id. ¶ 104). Subsequently,

where the lesion character . . . is selected by input section 303, that is, if the lesion character 105A is selected, for example, by selection tool 123 shown in FIG. 5 or the like, medical image display means 60 obtains position information of a medical image 115 corresponding to the lesion character 105A from lesion storage means 20 and displays the medical image 115 in detailed image display area 113

(id. ¶ 107). In particular, Moriya discloses that "link information display means 40 obtains a position 115A of lesion area from lesion storage means 20 and displays the position 115A of lesion area corresponding to the lesion character 105A in the medical image in the detailed image display area" (id.).
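To keep the mechanism just recounted straight, the following is a minimal, hypothetical sketch of the hyperlink flow the decision describes from Moriya ¶¶ 97–107: a lesion character in the finding field is linked to a stored lesion record, and selecting the character retrieves and displays the corresponding image and position. The record fields track the categories of stored information quoted above; the identifiers, values, and function names are illustrative assumptions, not Moriya's actual implementation.

# Hypothetical sketch (not Moriya's code) of the lesion-character hyperlink flow.

from dataclasses import dataclass

@dataclass
class LesionRecord:
    image_id: str           # information identifying the medical image containing the lesion
    lesion_position: tuple  # coordinate values of the lesion position
    region_name: str        # anatomical region name, e.g. "left upper lung lobe"

# Stand-in for lesion storage means 20: lesion character -> stored record.
lesion_storage = {
    "105A": LesionRecord(image_id="115", lesion_position=(120, 84), region_name="left upper lung lobe"),
}

def on_lesion_character_selected(character_id):
    """When a hyperlinked lesion character is selected, fetch its record and
    'display' the corresponding image and lesion position (printed here)."""
    record = lesion_storage[character_id]
    print(f"display image {record.image_id} in detailed image display area")
    print(f"mark lesion at {record.lesion_position} ({record.region_name})")

on_lesion_character_selected("105A")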
Moriya further discloses that

a three-dimensional image is reconstructed from multiple slice images, i.e., multiple two-dimensional images sliced at a predetermined thickness, coordinates of the position of a point specified in any of the two-dimensional images in the three-dimensional image may be calculated from the slice position and coordinates of the point, and the coordinate position in the three-dimensional image corresponding to the coordinates of the lesion position in the slice image is identified.

(id. ¶ 116). In other words, the lesion position stored as described above in association with multiple two-dimensional images is used to reconstruct a three-dimensional image containing the lesion. Subsequently, a "thumbnail 117B of a reference image in reference image area is selected and an image 133, which is a reconstructed second medical image corresponding to the selected thumbnail image 117B, is displayed in detailed image display area 113" (id. ¶ 119).

Moriya further discloses a "display condition determination means 90 for obtaining a lesion character described in an image reading report from lesion storage means 20, extracting a keyword corresponding to the lesion character, and determining the display condition from the extracted keyword" (id. ¶ 154), "and second medical image generation means 91 for generating a second medical image based on the determined display condition and storing the generated second medical image in medical image storage means 50" (id.). The display conditions described in paragraph 154 are stored in display condition table 94, which "includes display conditions associated with each keyword, such as VR color template, parameters of gradation and emphasis processing, window width of medical image, window level, image recognition processing using CAD and the like" (id. ¶ 160). Display condition table 94 is shown in Figure 19 of Moriya, reproduced below.

[Figure 19 of Moriya: display condition table 94, associating keywords with display conditions; not reproduced in this text.]

Moriya further discloses:

For example, if the lesion character is a character indicating blood vessel abnormality, display condition determination means 90 extracts "blood vessel" as the keyword from the keyword table, and determines the display condition "101" corresponding to the keyword "blood vessel" from display condition table 94a. Then, display condition determination means 90 outputs the determined display condition "101" and lesion character "blood vessel" to second medical image generation means 91.

(id. ¶ 161).

Accordingly, Moriya's system generates a reconstructed medical image according to display conditions stored in display condition table 94, which contains display conditions associated with keywords that represent body parts (e.g., "blood vessel," "liver"), the display conditions including, for example, display information such as the window width of the medical image to be generated. From the foregoing disclosure, a person of ordinary skill in the art would understand that when a user of Moriya's system selects a thumbnail (117A, 117B, 121A) of an image corresponding to a reconstructed three-dimensional image of an organ having a lesion, the reconstructed three-dimensional image is displayed according to the associated display conditions (e.g., window width/level) stored in display condition table 94 in association with the organ extracted from the image information.
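To make the table-driven lookup that the decision walks through above easier to follow, here is a minimal, hypothetical sketch of display condition determination means 90 consulting display condition table 94. Only the "blood vessel" keyword and display condition "101" come from the paragraph 161 example quoted above; the other table entries and all names are invented for illustration and are not Moriya's actual code or data.

# Hypothetical sketch (not Moriya's code): keyword-to-display-condition lookup.

# Stand-in for display condition table 94: keyword -> display condition ID.
DISPLAY_CONDITION_TABLE = {
    "blood vessel": "101",   # example from Moriya ¶ 161
    "liver": "102",          # illustrative entry only
}

# Stand-in keyword table: lesion character -> keyword.
KEYWORD_TABLE = {
    "blood vessel abnormality": "blood vessel",
    "liver tumor": "liver",   # illustrative entry only
}

def determine_display_condition(lesion_character):
    """Rough analogue of display condition determination means 90: extract the
    keyword for a lesion character, then look up its display condition."""
    keyword = KEYWORD_TABLE.get(lesion_character)
    if keyword is None:
        return None
    return keyword, DISPLAY_CONDITION_TABLE[keyword]

# Per the ¶ 161 example, a blood vessel abnormality yields keyword "blood vessel"
# and display condition "101", which are passed on for second-image generation.
print(determine_display_condition("blood vessel abnormality"))  # ('blood vessel', '101')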
Given that Moriya discloses receiving user input of a lesion position, extracting organs, generating medical images, and then, upon user selection of a thumbnail, displaying the generated image according to display conditions stored in display condition table 94 in association with the extracted organ, we do not agree with Appellant that Moriya's system is limited to merely extracting organs that belong to the position of the lesion area that is entered into the system.

In view of the foregoing, we sustain the Examiner's rejection of claim 1 under 35 U.S.C. § 102(a)(1). We also sustain the Examiner's rejection of claims 2, 7–13, and 20, which fall with claim 1.

Dependent Claim 3

Claim 3 depends from claim 1 and further recites "wherein displaying the image of the user selection includes displaying the image according to the image viewing context associated with the one of the image references, when the user selection is one of the image references associated with the image viewing context." In other words, claim 3 requires that the image viewing context is actually used to display an image, whereas claim 1 merely requires determining whether an image viewing context is associated with the user selection.

Similar to the discussion above, the Examiner's position is that Moriya discloses displaying the image according to the image viewing context associated with the one of the image references, when the user selection is one of the image references associated with the image viewing context (Final Act. 6). Appellant argues that "Moriya does not determine the associated context as recited in claim 1, and further does not display any image using an image viewing context that is related to a user selection of an image reference" as required by claim 3 (Appeal Br. 8–9). However, we are not persuaded of error in the Examiner's rejection for substantially the same reasons discussed above.

In view of the foregoing, we sustain the Examiner's rejection of claim 3 under 35 U.S.C. § 102(a)(1).

Obviousness

Dependent Claims 4–6 and 14

Claim 4 depends from claim 1 and further recites "displaying the image of the user selection according to a default setting, when the user selection is not one of the image references associated with the image viewing context." The Examiner relies on paragraph 157 of Moriya as disclosing this limitation (Final Act. 8; Ans. 9–10). According to the Examiner (Ans. 10), the claimed default setting for displaying images is equivalent to Moriya's "user interface for selectively inputting anatomical structures and lesions when generating an image reading report" wherein "the names of anatomical structures and lesions are standardized" (Moriya ¶ 157).

Appellant argues that paragraph 157 of Moriya does not disclose a default setting and that the Examiner's rejection is based on impermissible hindsight (Appeal Br. 10; Reply Br. 7–8).

We agree with Appellant that the cited portion of Moriya does not disclose a default setting for displaying images as called for in claim 4. The discussion of standardization in paragraph 157 of Moriya clearly relates to standardizing the names of anatomical structures and lesions and not to standardizing any display conditions for displaying images.

Claims 5 and 6 depend from claim 4. Claim 14 depends from independent claim 11, recites a similar limitation as claim 4, and is rejected on the same basis (Final Act. 8).
In view of the foregoing, we do not sustain the Examiner's rejection of claims 4–6 and 14 under 35 U.S.C. § 103(a).

CONCLUSION

In summary:

Claims Rejected    35 U.S.C. §    Reference(s)/Basis     Affirmed         Reversed
1–14, 20           112(a)         Written Description                     1–14, 20
1–3, 7–13, 20      102(a)(1)      Moriya                 1–3, 7–13, 20
4–6, 14            103(a)         Moriya                                  4–6, 14

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED-IN-PART