Ex parte Nagata et al., Application No. 13/820,407 (P.T.A.B. Sep. 4, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 13/820,407
FILING DATE: 03/01/2013
FIRST NAMED INVENTOR: Kazumi Nagata
ATTORNEY DOCKET NO.: 412823US2PCT
CONFIRMATION NO.: 4135
EXAMINER: HOSSAIN, FARZANA E
ART UNIT: 2482
NOTIFICATION DATE: 09/06/2018
DELIVERY MODE: ELECTRONIC

22850 7590 09/06/2018
OBLON, MCCLELLAND, MAIER & NEUSTADT, L.L.P.
1940 DUKE STREET
ALEXANDRIA, VA 22314

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): patentdocket@oblon.com, OBLONPAT@OBLON.COM, tfarrell@oblon.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte KAZUMI NAGATA, TAKAAKI ENOHARA, KENJI BABA, SHUHEI NODA, and NOBUTAKA NISHIMURA

Appeal 2018-000920
Application 13/820,407
Technology Center 2400

Before JOHN A. JEFFERY, BRUCE R. WINSOR, and JUSTIN BUSCH, Administrative Patent Judges.

WINSOR, Administrative Patent Judge.

DECISION ON APPEAL

Pursuant to 35 U.S.C. § 134(a), Appellants¹ appeal from the Examiner's decision to reject claims 20, 21, 23-31, and 33-38, which constitute all the claims pending in this application. We have jurisdiction under 35 U.S.C. § 6(b). Claims 1-19, 22, and 32 have been cancelled. We affirm.

¹ Appellants identify the real party in interest as "Kabushiki Kaisha Toshiba." Br. 1. "Kabushiki Kaisha Toshiba" is the Applicant for the instant patent application. Bib. Data Sheet.
STATEMENT OF THE CASE

Appellants' disclosed invention relates to sensing a predetermined space's state captured by an image based on a mask region and a detection region of the image. Spec., Abstract, 2:21-33. Claim 20, which is illustrative, reads as follows:

20. An image sensor system comprising:
an image capturing unit that captures an image of a predetermined space;
an image acquiring unit that acquires the image captured by the image capturing unit;
a mask region deriving unit that derives, by using the image acquired by the image acquiring unit, a mask region not to be sensed from the image;
a detection region deriving unit that derives, by using the image acquired by the image acquiring unit, a detection region as a sensing target from the image;
a retaining unit that retains the mask region and the detection region as setting information;
a sensing unit that senses a state of the space from the image acquired by the image acquiring unit based on the setting information retained in the retaining unit; and
an action acquiring unit that acquires a feature amount of each region of the image, corresponding to a numerical value of an action content of a person in the space, from an image within a predetermined period acquired by the image acquiring unit,
wherein the action acquiring unit calculates an action amount in each region by identifying the action content of the person in the space from the feature amount, and calculates an occurrence frequency of each action, and
the detection region deriving unit derives a first region based on the occurrence frequency, classifies the first region for every predetermined type based on content of the action amount, and determines the classified first region as the detection region classified for every predetermined type.

Claims 20, 21, 25, 26, 28, 30, 31, and 35-38 stand rejected under 35 U.S.C.
§ 103(a)² as being unpatentable over Sablak (WO 2011/002775 A1, published Jan. 6, 2011) and Ueno et al. (US 2003/0117279 A1, published June 26, 2003). See Final Act. 4-15.

Claims 23, 24, 27, 33, and 34 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Sablak, Ueno, and Weinmann et al. (US 2010/0214411 A1, published Aug. 26, 2010). See Final Act. 16-17.

Claim 29 stands rejected under 35 U.S.C. § 103(a) as being unpatentable over Sablak, Ueno, and Takano et al. (US 5,850,254, issued Dec. 15, 1998). See Final Act. 17-18.

Rather than repeat the arguments here, we refer to the Brief ("Br." filed June 12, 2017) for Appellants' positions; the Final Office Action ("Final Act." mailed Oct. 4, 2016) and Examiner's Answer ("Ans." mailed Aug. 18, 2017) for the reasoning, findings, and conclusions of the Examiner; and the Specification ("Spec." filed Mar. 1, 2013). Only those arguments actually made by Appellants have been considered in this decision. Arguments that Appellants did not make in the Brief have not been considered and are deemed to be waived. See 37 C.F.R. § 41.37(c)(1)(iv) (2016).

² All rejections are under the provisions of 35 U.S.C. in effect before the effective date of the Leahy-Smith America Invents Act of 2011 ("pre-AIA"). Final Act. 2.

ANALYSIS

We begin by noting the Examiner's obviousness rejection relies principally on Sablak for teaching many of the recited elements of claim 20. Final Act. 4-6. In particular, the Examiner finds Sablak teaches an action acquiring unit that "acquires a feature amount of each region of the image, corresponding to a numerical value of an action content of a person in the space, ... [and] calculates an action amount in each region by identifying the action content of the person in the space from the feature amount" (hereinafter "the action acquiring unit limitation"), as recited in claim 20. Final Act. 5 (citing Sablak ¶¶ 35, 45, 72, 81); Ans. 17.
Appellants assert Sablak's video surveillance system tracks a motion of objects in a region, and creates a motion density map to identify areas of interest. Br. 4 (citing Sablak ¶ 72; Fig. 6). Appellants further assert Sablak teaches distinguishing between a motion of two objects, id. at 5, but argue Sablak does not teach the action acquiring unit limitation, id. at 4-5. In particular, Appellants argue Sablak does not calculate action content. Id. at 5.

We construe the recited terms of the action acquiring unit limitation of claim 20, including "action content," "feature amount," and "action amount." Claim construction is an issue of law that is reviewed de novo. Cordis Corp. v. Boston Scientific Corp., 561 F.3d 1319, 1331 (Fed. Cir. 2009). We give claims their broadest reasonable interpretation consistent with the Specification. In re Am. Acad. of Sci. Tech. Ctr., 367 F.3d 1359, 1369 (Fed. Cir. 2004).

The Specification is not a model of clarity regarding these recited terms. Notably, the Specification discloses the term "feature amount" as "corresponding to a numerical value of an action of the person .... Herein, the feature amount is, for example, an action amount." Spec. 12:7-9. The action acquiring unit limitation, however, recites no such identity between the feature amount and action amount, but instead calculates an action amount using the feature amount. Moreover, the Specification discloses the term "action amount" as being calculated using action contents obtained from an accumulative differential image. Spec. 12:28-31. The Specification, however, further discloses the accumulative differential image is generated using an action amount. Spec. 12:12-17 (disclosing "when the action amount is acquired ..., a difference (differential image) between the image is extracted ..., thereby generating an accumulative differential image.").
The Specification further discloses the term "action content" as being identified from a feature amount using an identification model. Spec. 12:25-28. Nevertheless, although the Specification does not explicitly define these recited terms, or set forth a clear relationship or delineation between these recited terms, the Specification informs our understanding of the recited terms. Each of the recited terms is a compound term with no ordinary meaning apart from the ordinary meaning of the individual words that make up the compound terms. We, therefore, construe the recited terms in accordance with their ordinary meanings. Notably, the term "action" is defined as "[a] movement," THE AMERICAN HERITAGE DICTIONARY OF THE ENGLISH LANGUAGE 17 (n. def. 3) (3rd ed. 1992); the term "content" is defined as "something contained," id. at 407 (n. def. 1); the term "amount" is defined as a "[q]uantity," id. at 62 (n. def. 5); and the term "feature" is defined as "a prominent or distinctive aspect, quality, or characteristic," id. at 668 (n. def. 2). Each of these ordinary meanings corresponds with the description of the invention in the Specification. See In re Smith Int'l, Inc., 871 F.3d 1375, 1382-83 (Fed. Cir. 2017).

Therefore, under their broadest reasonable interpretation, the term "action content" encompasses the movement of a particular person or thing, or in a particular area; the term "action amount" encompasses a quantity of movement; and the term "feature amount" encompasses a quantity of an aspect, quality, or characteristic, which may include, but is not limited to, action (see Spec. 12:7-9).

Turning to Appellants' contentions, we see no error in the Examiner's finding that Sablak teaches action content. Final Act. 5 (citing Sablak ¶¶ 35, 45, 72, and 81); Ans. 17. Sablak is generally directed to a video surveillance system. Sablak, Abstract.
Sablak's video surveillance system uses video input to create a motion density map based on movements of people within a region. Id. ¶ 19. Sablak's motion density map is created by continuously updating motion pixels over a period of time, id. ¶ 69, such as by using an averaging function between a number of video frames, id. ¶¶ 36, 70-71. Thus, despite Appellants' argument to the contrary, Br. 4, 6, Sablak's motion density map at least suggests identifying movement of a particular person or in a particular area (the recited "action content") from an image.

Sablak's system further selects the densest areas of the motion density map as possible areas of interest using specified area thresholds. Sablak ¶ 72; Fig. 6. Thus, contrary to Appellants' arguments that Sablak does not teach or suggest the action acquiring unit limitation, Br. 4-5, Sablak at least suggests acquiring aspect, quality, or characteristic quantities (the claimed "feature amount") of each area of interest (the claimed "region") of a video image that corresponds to a numerical value of a movement (the claimed "action content") of a person in the space, and calculating densest areas (the claimed "action amount") in each area of interest by identifying the motion density map of a person in a space from the acquired quantities of shapes.

The Examiner further finds, although Sablak teaches determining an occurrence of an action in an image, Final Act. 5 (citing Sablak ¶¶ 35, 45, 72, 81), Sablak does not teach calculating an occurrence frequency of each action, id. Moreover, the Examiner finds, although Sablak teaches deriving a first region in the image, id. (citing Sablak ¶¶ 35, 45, 72, 81), Sablak does not teach such deriving is based on the occurrence frequency, id. at 5-6. The Examiner, however, cites Ueno for teaching this limited feature in concluding that the claim would have been obvious. Id. at 6 (citing Ueno ¶¶ 78, 80, 81; Fig. 3).
We see no error in the Examiner's reliance on Ueno merely for this limited purpose. Therefore, Appellants' argument that Ueno obtains no information from an image, much less derives a region in an image from an occurrence frequency, Br. 5, is unpersuasive where, as here, the rejection is not based solely on Ueno, but rather on the collective teachings of Sablak and Ueno, the former obtaining information from an image such as deriving a first region in the image. Therefore, Appellants' arguments regarding Ueno's individual shortcomings in this regard do not show nonobviousness where, as here, the rejection is based on the cited references' collective teachings. See In re Merck & Co., Inc., 800 F.2d 1091, 1097 (Fed. Cir. 1986).

Nor do we find availing Appellants' contention that

[o]ne skilled in the art would also not combine Sablak and Ueno to obtain the system of claim 20 or the apparatus of claim 30 since Sablak is based on video surveillance while Ueno is based upon motion detectors and specifically does not use cameras. The use of a camera is not mentioned anywhere in Ueno .... The systems of Sablak and Ueno are designed differently, function differently, and will not work together based upon imaging.

Br. 6.

The test of obviousness is not whether one reference can be bodily inserted into another, but, "[r]ather, the test is what the combined teachings of the references would have suggested to one of ordinary skill in the art," In re Keller, 642 F.2d 413, 425 (CCPA 1981), who is a person of ordinary creativity and not an automaton, KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 421 (2007), and whose inferences and creative steps we may consider, id. at 418. On this record, we see no reason why combining Sablak's action amount calculations from captured images with Ueno's occurrence frequency calculation for each action would have been "uniquely challenging or difficult for one of ordinary skill in the art."
Leapfrog Enters., Inc. v. Fisher-Price, Inc., 485 F.3d 1157, 1162 (Fed. Cir. 2007) (citing KSR, 550 U.S. at 419); see also Keller, 642 F.2d at 425 ("To justify combining reference teachings in support of a rejection it is not necessary that a device shown in one reference can be physically inserted into the device shown in the other."). The arguments presented by Appellants do not persuade us of error in the rejection of claim 20. See Ex parte Frye, 94 USPQ2d 1072, 1075 (BPAI 2010) (precedential).

Accordingly, we sustain the rejections of (1) independent claim 20; (2) independent claims 30, 37, and 38, which are argued relying on the arguments made for claim 20 (see Br. 4-8); and (3) claims 21, 23-29, 31, and 33-36, which depend, directly or indirectly, from claims 20 and 30, and were not separately argued with particularity (see id. at 8).

CONCLUSION

The Examiner did not err in rejecting claims 20, 21, 23-31, and 33-38 under § 103.

DECISION

The Examiner's decision to reject claims 20, 21, 23-31, and 33-38 is affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1). See 37 C.F.R. §§ 41.50(f), 41.52(b).

AFFIRMED