Ex parte Cobb et al., Appeal 2017-008305, Application 13/839,587 (P.T.A.B. Mar. 14, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 13/839,587
FILING DATE: 03/15/2013
FIRST NAMED INVENTOR: Wesley Kenneth COBB
ATTORNEY DOCKET NO.: OMAI-002/03US 330076-2089
CONFIRMATION NO.: 3417
Correspondent: COOLEY LLP, ATTN: Patent Group, 1299 Pennsylvania Avenue, NW, Suite 700, Washington, DC 20004
EXAMINER: BILLAH, MASUM
ART UNIT: 2486
NOTIFICATION DATE: 03/16/2018
DELIVERY MODE: ELECTRONIC

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte WESLEY KENNETH COBB, MING-JUNG SEOW, GANG XU, KISHOR ADINATH, ANTHONY AKINS, KERRY JOSEPH, and DENNIS G. URECH

Appeal 2017-008305
Application 13/839,587
Technology Center 2400

Before DENISE M. POTHIER, JUSTIN BUSCH, and JASON M. REPKO, Administrative Patent Judges.

BUSCH, Administrative Patent Judge.

DECISION ON APPEAL

Pursuant to 35 U.S.C. § 134(a), Appellants appeal from the Examiner's decision to reject claims 1–21, which constitute all the claims pending in this application. We have jurisdiction over the pending claims under 35 U.S.C. § 6(b). We reverse.

CLAIMED SUBJECT MATTER

Appellants' invention generally "relate[s] to configuring a behavioral recognition-based video surveillance system to generate alerts for certain events." Spec. ¶ 2.
More specifically, Appellants' invention allows "a behavioral recognition system to identify events that should always or never result in an alert without impeding the unsupervised learning process of the surveillance system." Id. Appellants' claimed system and methods are directed to aspects of a behavioral recognition video surveillance system, which is a system that learns objects and behaviors over time rather than relying on predefined rules and patterns. Id. ¶¶ 3–4. Appellants' invention provides the ability to override learned alert behavior—i.e., to specify certain behaviors that should always or never result in an alert. Id. ¶¶ 5–6.

Claims 1, 8, and 15 are independent claims. Claim 1 is illustrative and reproduced below:

1. A method for processing events generated from an evaluation of a stream of video frames, the method comprising:

obtaining characteristic values for an observed event in a scene depicted by the stream of video frames;

updating a learned state of the scene based on the characteristic values, wherein the learned state provides a model of patterns of behavior generated from evaluating a plurality of foreground objects detected in the stream of video frames;

parsing a list of alert directives for a matching alert directive having ranges of criteria values, wherein the characteristic values are within the ranges of the criteria values and wherein the alert directive overrides a decision to either publish an alert or to not publish an alert for the observed event based on the updated learned state of the scene without changing the updated learned state of the scene; and

upon identifying the matching alert directive, either publishing the alert or not publishing the alert according to the alert directive.

REJECTION

Claims 1–21 stand rejected under AIA 35 U.S.C. § 103(a) as obvious in view of Venetianer (US 2005/0162515 A1; July 28, 2005) and Xu (US 2008/0181453 A1; July 31, 2008). Final Act. 2–11.
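As a purely illustrative aside (no code appears in the record), the directive-matching logic recited in claim 1 can be sketched in Python. Every name below (AlertDirective, decide_alert, and so on) is hypothetical, and the sketch only mirrors the claim language: a list of directives is parsed for one whose criteria ranges contain the observed characteristic values, and a match overrides the learned decision without touching the learned state.

```python
from dataclasses import dataclass

@dataclass
class AlertDirective:
    # criteria_ranges maps a characteristic name to an inclusive (low, high) range
    criteria_ranges: dict
    # publish: True = always publish an alert on a match, False = never publish
    publish: bool

def matches(directive, characteristics):
    """True when every characteristic value falls inside the directive's range."""
    return all(
        lo <= characteristics.get(name, float("nan")) <= hi
        for name, (lo, hi) in directive.criteria_ranges.items()
    )

def decide_alert(characteristics, learned_decision, directives):
    """Parse the directive list; the first matching directive overrides the
    decision derived from the learned state. The learned state itself is
    never modified here, matching the 'without changing' limitation."""
    for directive in directives:
        if matches(directive, characteristics):
            return directive.publish
    return learned_decision
```

For example, a "never alert" directive covering speeds in [0, 5] suppresses an alert the learned model would otherwise have published for a speed of 3, while a speed of 9 falls outside the range and leaves the learned decision intact.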
ANALYSIS

The Examiner rejects claims 1–21 as obvious in view of the combined teachings of Venetianer and Xu. Final Act. 2–11. Of particular note, the Examiner finds Venetianer does not disclose the updating and parsing steps, but finds Xu teaches or suggests the updating and parsing steps. Id. at 4–5. The Examiner provides a rationale for combining the relevant teachings of Venetianer and Xu, concluding the claimed subject matter would have been obvious in view of the proposed combination. Id. at 6.

Venetianer is generally directed to a video surveillance system that uses event discriminators to extract event occurrences from video primitives and generate a response, such as an alarm, based on the event occurrences. Venetianer, Abstract. Venetianer allows an operator monitoring a location to enter event discriminators based on video primitives that may be evaluated alone or with other event discriminators to determine whether to trigger a response, such as activating alerts, locking a door, and activating another surveillance system. Id. ¶¶ 75–76, 97–98, 120, 137.

Xu is directed to a video surveillance system for tracking objects using appearance models for blobs (a group of pixels that may be associated with a particular object) to maintain object identity during an occlusion event (when the two blobs overlap). Xu, Abstract. Xu uses the appearance models to segment a group blob resulting from an occlusion event into regions that may be identified with the particular blobs involved in the occlusion event. Id.

Xu depicts and describes the stages "of a known intelligent video system." Xu ¶ 25, Fig. 1. Specifically, the disclosed known system includes nine stages, grouped into three main blocks: object segmentation, robust tracking, and object classification. Id. ¶ 3, Fig. 1. Xu's system replaces the known robust tracking block with its own matching process. Id. ¶¶ 39, 49, Fig. 3.
In the first stage of the object segmentation block, the system learns a background model based on an initial video segment. Xu ¶ 4. "The purpose of this stage 7 is to establish a background model from an initial segment of video data," and the background model ideally has no visible foreground objects. Id. ¶ 41. Xu's second stage of object segmentation is background subtraction, which compares each pixel of a current frame to the background model to estimate whether the pixel represents the background or a foreground object, and which may dynamically update small changes in the background model (whereas "more severe or sudden changes may require a relearning operation"). Id. ¶ 42. Next, Xu implements a false-foreground suppression stage, which "attempts to alleviate false detection problems caused by noise and camera jitter," followed by a shadow/highlight removal stage and the connected component analysis (CCA) stage, which is the final stage of object segmentation and "groups all pixels presumably belonging to individual objects into respective blobs." Id. ¶¶ 45–48. Xu distinguishes between blobs, which it describes as newly detected foreground elements or regions, and objects, which are foreground elements or regions that are being tracked. Id. ¶ 49.

Xu's matching process begins by analyzing each blob and assigning it one of four "attention levels" based on predefined rules. Xu ¶ 50. Subsequent processing of each blob depends on the attention level assigned to that blob. Id. The attention levels indicate the status of a blob with respect to potential occlusion with another blob. More particularly, attention level 1 is assigned to blobs that are not overlapping or close enough to the nearest neighbor to be imminently approaching an occlusion event. Id. ¶ 51.
If a blob is not overlapping, but within a proximity threshold of a neighbor, it is assigned attention level 2, whereas attention level 3 indicates occlusion is taking place, and attention level 4 indicates occluded objects are separating. Id. ¶¶ 52–61. When blobs are about to occlude (i.e., the blobs are assigned attention level 2), Xu creates or updates an appearance model that is used when blobs are assigned attention levels 3 and 4 to perform more accurate blob detection and segmentation. Id. ¶¶ 82–86, 90–99.

Appellants contend Xu does not teach or suggest either the updating step or the parsing step recited in the independent claims. Br. 9–11. In particular, with respect to the parsing step, Appellants assert the cited portion of Xu addresses a well-known video analytics problem of occluding foreground objects, which Xu addresses by changing how appearance models are updated to prevent one object from contaminating the other. Id. at 11. Appellants argue the appearance model updating described by Xu simply indicates sensitivity to interacting foreground objects but in no way teaches or suggests "a step of overriding whether alert messages are published for certain behavioral events observed for certain foreground objects." Id.

The Examiner responds that Appellants' argument "fails to account for the combination of Venetianer and Xu as a whole." Ans. 4. The Examiner finds Xu teaches analyzing incoming frames to identify blobs and assign each blob an attention level that determines further processing steps. Id. at 5 (citing Xu ¶ 50). The Examiner further finds Xu "teaches that match decision stage 53 created or updated for the relevant blobs depends on whether or not a match is made," and Xu's appearance model "comprises a color histogram indicating the frequency (i.e. number of pixels) of each color level that occurs within that blob." Id. at 4–5.
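To make the two Xu mechanisms discussed above concrete, the four-way attention levels driven by blob overlap and proximity, and the per-blob color-histogram appearance model, can be sketched as follows. This is only an illustrative sketch under stated assumptions, not Xu's actual implementation; all function names, bounding-box conventions, and quantization choices here are hypothetical.

```python
import math
from collections import Counter

def rects_overlap(a, b):
    """Axis-aligned bounding boxes (x1, y1, x2, y2); True if they intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def centre_distance(a, b):
    """Euclidean distance between the centres of two bounding boxes."""
    cax, cay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cbx, cby = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return math.hypot(cax - cbx, cay - cby)

def attention_level(bbox, neighbor_bboxes, proximity_threshold, separating=False):
    """Assign one of four attention levels, per the scheme described above.

    4: previously occluded objects are separating (tracked state, passed in)
    3: the blob overlaps a neighbor, i.e., occlusion is taking place
    2: no overlap, but a neighbor is within the proximity threshold
    1: isolated; no occlusion event is imminent
    """
    if separating:
        return 4
    if any(rects_overlap(bbox, n) for n in neighbor_bboxes):
        return 3
    if any(centre_distance(bbox, n) < proximity_threshold for n in neighbor_bboxes):
        return 2
    return 1

def color_histogram(pixels, levels=16, channel_max=256):
    """Quantized color histogram for one blob: pixel counts per color bin."""
    bin_size = channel_max // levels
    return Counter((r // bin_size, g // bin_size, b // bin_size) for r, g, b in pixels)

def histogram_match(h1, h2):
    """Histogram-intersection similarity in [0, 1]; 1 means identical distributions."""
    total = sum(h1.values())
    if total == 0:
        return 0.0
    return sum(min(c, h2.get(color, 0)) for color, c in h1.items()) / total
```

In this sketch, two overlapping boxes yield level 3, a nearby but non-overlapping neighbor yields level 2, and after an occlusion a blob would keep the identity of whichever stored appearance model its histogram best matches. Nothing in this sketch touches alerts, which is consistent with the Board's observation below that Xu's appearance models are unrelated to alert decisions.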
The Examiner then states "[o]nce attention manager assigned one of four possible levels based on predefined rules, it would be possible to publish or not publish an alert with the help of current system by recalling those assigned levels." Id. at 5. Finally, the Examiner finds Venetianer discloses that real-time video primitive extraction enables the system to generate real-time alerts. Id. at 6 (citing Venetianer ¶¶ 97, 119, 130). Thus, the Examiner concludes, the parsing step, and specifically the recited "wherein the alert directive overrides a decision to either publish an alert or to not publish an alert for the observed event based on the updated learned state of the scene without changing the updated learned state of the scene," would have been obvious in view of Venetianer and Xu.

Appellants do not dispute that Venetianer teaches generating system alerts or that Xu teaches creating or updating appearance models for blobs assigned attention level 2. Rather, Appellants argue Xu's attention levels and appearance models, discussed in paragraph 82 and cited by the Examiner, do not "teach, or in any way suggest" the recited alert directives that override a decision on whether or not to publish an alert. Br. 11.

We agree with Appellants that the Examiner's rejection fails to demonstrate or explain sufficiently how the proposed combination renders the claimed subject matter obvious. In particular, Xu's appearance models are not related to alerts, let alone overriding a decision on whether or not to publish an alert.

The Examiner's explanation that, after determining an attention level, "it would be possible to publish or not publish an alert with the help of current system by recalling those assigned levels" is similarly deficient. Ans. 5.
Even to the extent that it would be possible to publish or not publish an alert based on the assigned attention levels, the Examiner has not explained adequately a link between the assigned attention levels and an alert, or how the assigned attention levels would suggest to an ordinarily skilled artisan to override the decision to publish or not publish an alert taught by Venetianer. See id. Xu's attention levels are assigned and used to indicate a blob's relationship to occlusion events. Xu ¶¶ 50–61. Xu is concerned with robustly dealing with occlusions and providing improved segmentation of occluded blobs in order to better maintain individual object identity. Id. ¶¶ 8–9, 23.

Based on the record, the Examiner has not demonstrated Venetianer and Xu teach or suggest alert directives that override a decision to publish or not publish an alert. Nor has the Examiner articulated a sufficient reason with a rational underpinning to modify the combined teachings to arrive at Appellants' claimed subject matter.

For the reasons discussed above, we are persuaded the Examiner erred in finding the combination of Venetianer and Xu teaches or suggests the "parsing" limitation, as recited in independent claims 1, 8, and 15. Because our determination is dispositive of this appeal, we do not address Appellants' other arguments. Accordingly, we do not sustain the Examiner's rejection of independent claims 1, 8, and 15. For similar reasons, we also do not sustain the Examiner's rejections of claims 2–7, 9–14, and 16–21, which depend from claims 1, 8, and 15, respectively.

DECISION

We reverse the Examiner's decision to reject claims 1–21 under 35 U.S.C. § 103(a).

REVERSED