Ex Parte Ensel et al
Board of Patent Appeals and Interferences
Sep. 10, 2010
10/042,278 (B.P.A.I. Sep. 10, 2010)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 10/042,278
FILING DATE: 01/11/2002
FIRST NAMED INVENTOR: Christian Ensel
ATTORNEY DOCKET NO.: 1454.1212
CONFIRMATION NO.: 5517

21171 7590 09/10/2010
STAAS & HALSEY LLP
SUITE 700
1201 NEW YORK AVENUE, N.W.
WASHINGTON, DC 20005

EXAMINER: CHANKONG, DOHM
ART UNIT: 2452
MAIL DATE: 09/10/2010
DELIVERY MODE: PAPER

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________
BEFORE THE BOARD OF PATENT APPEALS AND INTERFERENCES
____________________
Ex parte CHRISTIAN ENSEL and VOLKMAR STERZING
____________________
Appeal 2009-006326
Application 10/042,278[1]
Technology Center 2400
____________________
Before JEAN R. HOMERE, DEBRA K. STEPHENS, and JAMES R. HUGHES, Administrative Patent Judges.

HUGHES, Administrative Patent Judge.

DECISION ON APPEAL[2]

[1] Application filed January 11, 2002. The real party in interest is Siemens Aktiengesellschaft. (App. Br. 1.)

[2] The two-month time period for filing an appeal or commencing a civil action, as recited in 37 C.F.R. § 1.304, or for filing a request for rehearing, as recited in 37 C.F.R. § 41.52, begins to run from the "MAIL DATE" (paper delivery mode) or the "NOTIFICATION DATE" (electronic delivery mode) shown on the PTOL-90A cover letter attached to this decision.

STATEMENT OF THE CASE

Appellants appeal from the Examiner's rejection of claims 1-11, 13, 14, and 27-29 under authority of 35 U.S.C. § 134(a). Claims 12 and 15-26 have been canceled. The Board of Patent Appeals and Interferences (BPAI) has jurisdiction under 35 U.S.C. § 6(b). We affirm.

Appellants' Invention

Appellants invented a device, medium, and method for monitoring a telecommunication network and for training a statistical estimator used in such monitoring. (Spec. 1, ¶ [0002].)[3]

[3] We refer to Appellants' Specification ("Spec."); Appeal Brief ("App. Br.") filed January 29, 2008; and Reply Brief ("Reply Br.") filed June 23, 2008. We also refer to the Examiner's Answer ("Ans.") mailed April 21, 2008.

Representative Claim

Independent claim 1 further illustrates the invention. It reads as follows:

1. A method for computer-aided monitoring of a telecommunication network formed of devices capable of communication, said method comprising:

    determining training activity parameters, each describing activity of at least one of a corresponding device and a corresponding service;

    determining possible dependences between devices and services from the training activity parameters;

    determining from the possible dependences a normal range of dependence for at least some of the devices and services in essentially undisturbed states to train a neural network as a statistical estimator;
    determining current activity parameters, each describing activity of at least one of a corresponding device and a corresponding service;

    comparing the current activity parameters by the statistical estimator with the normal range of dependence; and

    determining from said comparing whether at least one of the devices and services in the telecommunication network has a communication performance different from the normal range of dependence in accordance with a predetermined criterion.

References

The Examiner relies on the following references as evidence of unpatentability:

Waclawsky, US 5,974,457, Oct. 26, 1999.

Nittida Nuansri, Tharam S. Dillon & Samar Singh, An Application of Neural Network and Rule-Based System for Network Management: Application Level Problems, Proceedings of the Thirtieth Hawaii International Conference on System Sciences (HICSS) Vol. 5, 474-483 (January 1997) (hereinafter "Nuansri").

Rejection on Appeal

The Examiner rejects claims 1-11, 13, 14, and 27-29 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Waclawsky and Nuansri.

ISSUE

Based on our review of the administrative record, Appellants' contentions, and the Examiner's findings and conclusions, the pivotal issue before us is as follows:

Does the Examiner err in finding the Waclawsky and Nuansri references can properly be combined, and would have collectively taught or suggested "determining from the possible dependences a normal range of dependence for at least some of the devices and services in essentially undisturbed states to train a neural network as a statistical estimator" as recited in Appellants' claim 1?

FINDINGS OF FACT (FF)

Appellants' Specification

1. Appellants' Specification describes a method for monitoring a telecommunication network, including: determining activity parameters describing activity between a device and a corresponding service, e.g., communication activity among devices or services over a communication network; determining possible dependences (relationships and/or network connections) between devices and services from the activity parameters; and determining from the possible dependences a normal range of dependence (normal/usual performance) for at least some of the devices and services in essentially undisturbed states (in normal operation (i.e., not under attack – see ¶¶ [0005], [0008]-[0010], [0018]-[0022])) to train a neural network as a statistical estimator. (¶¶ [0005]-[0010], [0018]-[0022], [0024], [0025], [0038]-[0041], [0045]-[0051], & [0077].)

2. The described devices may include computers (personal computers, workstations, laptops, and servers), printers and/or switches. (¶ [0006].) The described services may include "application programs in a state of execution[,] . . . for example, a web server, a file server, databases, [and] various application servers." (¶ [0007].) The Specification explains that activity and activity parameters (training activity parameters) may include: the number of data packets transmitted or received by a device or service, "the processor utilization of the respective device," and/or "the number of predetermined system function calls" by the device or service. (¶ [0050]; see ¶¶ [0020], [0039].)
The Specification also explains that dependences are "communication-related dependences between the devices or services" (¶ [0040]) or "combinations of two devices or services coupled to one another in the telecommunication network" (¶ [0045]). (See ¶¶ [0038]-[0041], & [0045]-[0049].) The "normal range of dependence" is equivalent to the "normal performance" or "undisturbed performance" of the devices/services, i.e., a normal "communication performance" that is within "a predetermined range of tolerances." (¶¶ [0018], [0021], [0022].) The Specification further explains that "[t]he statistical estimator is trained with the usual performance of the devices or services, that is to say with the normal range of dependence." (¶ [0025]; see ¶¶ [0024]-[0025], [0038]-[0041], & [0045]-[0049].) In an embodiment of Appellants' invention, the system stores the devices, services, and activity parameters in a database (matrix), and determines from the matrix the device/service dependences (pairs) and normal range of dependence (operation). (Spec. ¶¶ [0047]-[0049].)

3. Appellants do not explicitly define a "neural network" in their Specification. The Specification, however, does describe the functionality of the neural network – the neural network "makes it possible to model both local relationships and global relationships of the communication performance of the respective pair of devices" (¶ [0077]) – as well as an exemplary neural network. (¶¶ [0071]-[0077]; Fig. 2.) Appellants define a "statistical estimator" as "a basically arbitrary neural model, that is to say a neural network, or else a neuro-fuzzy model, which is trained by known training methods." (¶ [0051].)

Waclawsky Reference

4. Waclawsky describes:

    [A] system and method to enable real-time establishment and maintenance of a standard of operation for a data communications network. The standard is a data set which includes network activity which is historically categorized by traffic type and by activity. The process begins with monitoring the network media or some network component over some period of time. The monitoring information is used to build benchmark data sets. The benchmark data sets contain a standard of operation for the network, which are historically categorized by either traffic type or activity. This standard of operation is constantly built by the intelligent monitoring facilities. After some period of time which is referred to as the benchmark data set refresh interval, the benchmark that was created is employed in a fashion to allow a determination as to whether the data that is taken from the current monitoring activity indicates normal network behavior.

(Abstract.)

Waclawsky further describes the operation of its expert system (the expert analysis module) and intelligent monitoring system (the intelligent realtime monitor) – the expert system compares communication network data inputs (event vectors) to network behavior standards (rules) for determining if the communication network data falls within normal parameters, and the intelligent monitoring system produces benchmark data sets that may be substituted for the standards used by the expert system:

    [T]he expert analysis module 160, will have the information represented by the event vectors compared with standards of behavior for the network. The comparison is performed by the rules contained in the rule based criteria modules 150.
    The standards can be predetermined, predefined standards such as average utilization for particular types of traffic such as batch traffic, interactive traffic, voice traffic or video traffic. Another important type of standard is the benchmark data set which is the accumulated history of behavior of traffic on the network, as it has been monitored by the system shown in FIGS. 1A and 1B. The benchmark data sets 110, in accordance with the invention, can progressively accumulate a more accurate representation of the expected behavior for the network and that standard can be substituted for the predetermined standard used by the rules in the rule based criteria modules 150.

(Col. 4, l. 56 to col. 5, l. 4; see Figs. 1A-1B.)

5. Waclawsky explains that the expert system compares the communication network data against predetermined standards and/or benchmark data sets to produce outputs (inference signals) that represent the behavior of the network or a particular user-defined class of communication traffic – such as particular nodes, packet sizes, and/or types of traffic (batch, video, or audio/voice traffic) – and may be used to control the network. (Col. 5, ll. 5-30.) Waclawsky also describes that the benchmark data set may describe a range of ordinary network behavior that falls within certain limits. (Col. 10, l. 54 to col. 11, l. 23.) For example, Waclawsky describes determining and comparing benchmark triggers (peak utilization range and frame/second range). (See Fig. 10B.)

Nuansri Reference

6. Nuansri describes a hybrid network management system that includes a neural network and a rule-based system for monitoring and diagnosing problems in a communication network, in particular, at an application level. (p. 474, Abstract & Sec. 1.) Nuansri's neural network (BRAINNE) has the capability to "learn complex, non-linear functions." (p. 475, para. 1; see p. 477, Sec. 5.) The rule-based expert system monitors, for example, the domain name system (DNS) – which utilizes name servers and a distributed database to provide DNS services (resolve names and IP addresses) (p. 475) – and the neural network (BRAINNE) acquires knowledge concerning, for example, application (DNS) errors and the causes of these errors. (pp. 475, Sec. 2; 477-78, Sec. 5.) The neural network (BRAINNE) creates rules and provides these to the expert system, and the rule-based expert system analyzes and diagnoses problems in the communication system. (p. 478, para. 1.)

ANALYSIS

Appellants contend that the Waclawsky and Nuansri references "taken alone or in combination, fail to describe 'determining from the possible dependences a normal range of dependence for at least some of the devices and services essentially undisturbed states to train a neural network as a statistical estimator' as recited by claim 1" (Reply Br. 1). (See App. Br. 8-10; Reply Br. 1-3.) Appellants also contend that the Examiner's cited combination of the Waclawsky and Nuansri references is improper "because there is insufficient evidence for a motivation to use the method of Nuansri et al. in the system described by Waclawsky et al." (App. Br. 11), and that Waclawsky "teaches away from a statistical estimator training system" (App. Br. 9). (See App. Br. 9, 10-11; Reply Br. 3.) The Examiner finds that the Waclawsky and Nuansri references teach each feature of Appellants' claim 1 and maintains that the claim is properly rejected as obvious over the reference combination.
(Ans. 3-5, 8-12.) Specifically, the Examiner finds that: Waclawsky describes "determining a normal range of dependence" (Ans. 9) and "training a rule-based network as a statistical estimator" (Ans. 10); Nuansri describes a hybrid neural network and rule-based network combination, as well as training the neural network (Ans. 10); and one skilled in the art at the time of Appellants' invention would have been motivated to make the cited reference combination (Ans. 11-12).

Based on these contentions, we decide the question of whether the Examiner erred in finding the Waclawsky and Nuansri references can properly be combined, and would have collectively taught or suggested the disputed feature of determining from the possible dependences a normal range of dependence for at least some of the devices and services in essentially undisturbed states to train a neural network as a statistical estimator. After reviewing the record on appeal, we agree with the Examiner that a skilled artisan could have properly combined the Waclawsky and Nuansri references and that the combination would have taught the disputed feature.

The dispute before us hinges on the disagreement between the Examiner and Appellants as to what constitutes "possible dependences" (between devices and services), "a normal range of dependence," "a neural network," and "a statistical estimator," and the interpretation of these terms is critical to resolving this dispute. Thus, we begin our analysis by construing Appellants' disputed claim limitation. We give claim terminology the "broadest reasonable interpretation consistent with the [S]pecification" in accordance with our mandate that "claim language should be read in light of the [S]pecification as it would be interpreted by one of ordinary skill in the art." In re Am. Acad. of Sci. Tech Ctr., 367 F.3d 1359, 1364 (Fed. Cir. 2004) (citations omitted).

Appellants' claim 1 in relevant part recites:

    determining training activity parameters, each describing activity of at least one of a corresponding device and a corresponding service; determining possible dependences between devices and services from the training activity parameters; [and] determining from the possible dependences a normal range of dependence for at least some of the devices and services in essentially undisturbed states to train a neural network as a statistical estimator.

(App. Br. 12, Claim App'x., claim 1 (emphasis added to the limitation in dispute).)

Appellants describe "dependences" as "communication-related" relationships between computer devices (or peripherals), and/or services such as databases and application programs executed by servers. Appellants' system determines the dependences from communication network activity (activity parameters) including traffic volume (data packets transmitted/received), processor utilization, and system function calls. Appellants explicitly describe the "normal range of dependence" as equivalent to the normal or usual communication performance of a particular device/service or device/service pair. (FF 1-2.) Appellants explicitly define a "statistical estimator" as "a basically arbitrary neural model, that is to say a neural network, or else a neuro-fuzzy model, which is trained by known training methods." Appellants do not explicitly define a "neural network" either in their claim or Specification. (FF 3.)
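For illustration only, the following minimal sketch (in Python, using hypothetical device names, activity values, a simple ratio-based dependence measure, and an assumed tolerance, none of which are drawn from the record) shows one way training activity parameters, possible dependences, and a normal range of dependence could be computed before a statistical estimator is trained on that range.

    # Illustrative sketch only; device names, activity values, dependence measure,
    # and tolerance are hypothetical and are not taken from the application or the
    # cited references.
    from itertools import combinations
    from statistics import mean

    # Training activity parameters: per-interval packet counts observed for each
    # device or service while the network is in an essentially undisturbed state.
    training_activity = {
        "web_server":  [120, 130, 125, 118, 127],
        "database":    [60, 66, 62, 59, 64],
        "file_server": [15, 20, 18, 17, 21],
    }

    def dependence(series_a, series_b):
        """Crude pairwise 'dependence' score: the mean ratio of activity between
        two devices/services over the training intervals (a hypothetical measure)."""
        return mean(a / b for a, b in zip(series_a, series_b) if b)

    # Possible dependences between devices and services, derived from the
    # training activity parameters.
    possible_dependences = {
        (a, b): dependence(training_activity[a], training_activity[b])
        for a, b in combinations(training_activity, 2)
    }

    # Normal range of dependence: a tolerance band around undisturbed behavior.
    # In the claimed method this range would serve as the training target for a
    # neural network acting as the statistical estimator.
    normal_range = {
        pair: (score * 0.8, score * 1.2)   # plus or minus 20 percent, hypothetical
        for pair, score in possible_dependences.items()
    }

    def within_normal_range(pair, current_score):
        """Compare a current dependence score against the trained normal range."""
        low, high = normal_range[pair]
        return low <= current_score <= high

A neural network or neuro-fuzzy model trained against such ranges would then serve as the statistical estimator that compares current activity parameters with the normal range of dependence.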
We understand a neural network to be a system of programs and data structures that approximates the operation of a biological neural network (the human brain). A neural network typically "learns" (is trained) by receiving weighted inputs (data and rules about data relationships). A neural network adapts with time and repetition to produce appropriate outputs, i.e., it "learns" from examples. See Microsoft Computer Dictionary, Fifth Edition (May 2002); Free On-line Dictionary of Computing ("FOLDOC"), available at http://foldoc.org. We also understand a "statistical estimator" to be a device or program, in this instance a neural network, that performs the function of providing an estimate (approximate calculation) based on statistical analysis.

Although ultimately unnecessary for our construction of the disputed claim limitation, we also note it is known within the art (as argued at length by Appellants – see App. Br. 7) that an expert system uses sets of rules and data to produce a decision or recommendation. Neural networks, on the other hand, attempt to simulate the human brain by collecting and processing data for the purpose of "learning," and adapt their analysis criteria through the learning process. One of the generally recognized differences between an expert system and a neural network is that a neural network can adapt its criteria to better match the data it analyzes, while an expert system typically produces results without adjusting for changes in the analyzed data. See generally, Microsoft Computer Dictionary; FOLDOC; App. Br. 7-8.

Based on Appellants' disclosures, we broadly but reasonably construe Appellants' disputed claim limitation to mean determining, based on communication network activity, a normal communication performance or range of performance, and providing the normal performance as an input to a neural network – a system of programs and data structures that learns and adapts with time and repetition – to train the neural network to provide an estimate based on statistical analysis (i.e., to function as a statistical estimator). This construction is consistent with the cited references, Appellants' Specification, and the knowledge of those skilled in the art at the time of Appellants' invention.

As detailed in the Findings of Fact section supra, the Waclawsky reference describes a system for determining a standard of operation for a data communications network, portions of the network, and/or particular nodes, types of communication traffic, or activities in the network. Waclawsky's system monitors portions of the network, such as particular communications links or components, to build benchmark data sets that the system utilizes as the standard of operation, in addition to predefined rules, to determine whether the current operation of the network is normal – that is, within a normal range of network behavior. (FF 4-5.) Waclawsky's benchmark data sets also describe a range of ordinary network behavior that falls within certain limits, for example, benchmark triggers including peak utilization range and frame/second range. (FF 5.) Accordingly, Waclawsky describes monitoring communication network activity between network devices to produce benchmark data, and an adaptive expert system that adapts its evaluation criteria to include benchmark data sets for a range of ordinary activity.
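As a purely hypothetical sketch (the class name, trigger margin, and sample values below are invented for illustration and are not drawn from Waclawsky), this benchmark-data-set mechanism can be thought of as accumulating a history of observed utilization, deriving trigger ranges from that history, and then testing whether current monitoring data falls inside those ranges.

    # Hypothetical illustration of a benchmark data set; names and numbers are
    # invented for this sketch and do not come from the Waclawsky reference.
    class BenchmarkDataSet:
        """Accumulates a history of observed utilization for one traffic class and
        derives a 'normal' range (benchmark triggers) from that history."""

        def __init__(self):
            self.history = []          # observed peak-utilization samples (percent)

        def record(self, utilization):
            self.history.append(utilization)

        def triggers(self, margin=0.10):
            """Return a (low, high) trigger range with an assumed 10 percent margin
            around the observed minimum and maximum."""
            low, high = min(self.history), max(self.history)
            return low * (1 - margin), high * (1 + margin)

        def is_normal(self, current_utilization):
            low, high = self.triggers()
            return low <= current_utilization <= high


    # During the benchmark refresh interval, monitoring data builds the benchmark.
    benchmark = BenchmarkDataSet()
    for sample in (42.0, 47.5, 44.1, 51.2, 46.8):   # assumed monitoring samples
        benchmark.record(sample)

    # Afterwards, current monitoring activity is checked against the benchmark.
    print(benchmark.is_normal(48.3))   # True:  within the normal range
    print(benchmark.is_normal(93.0))   # False: outside the benchmark triggers

An expert system whose rules consult such progressively refreshed benchmark ranges adapts its evaluation criteria in the manner described above.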
This range of normal activity is equivalent to an estimate of normal activity produced by statistical analysis. Thus, we find Waclawsky teaches determining, based on communication network activity, a normal communication performance or range of performance, and providing the normal performance as an input to an adaptive expert system (a system of programs and data structures that learns and adapts with time and repetition) which functions as a statistical estimator. We also find Waclawsky teaches training its adaptive expert system utilizing the benchmark data.

The Nuansri reference describes a hybrid communication network management system including a neural network and a rule-based system for monitoring and diagnosing problems in a communication network, in particular, at an application level. Nuansri's neural network (BRAINNE) and rule-based expert system monitor communication devices and services, for example, the domain name system (DNS), which utilizes servers and a distributed database to provide DNS services. The neural network creates evaluation rules and/or criteria, and provides these to the rule-based expert system, which analyzes and diagnoses problems in the communication system. (FF 6.) Accordingly, we find Nuansri teaches monitoring communication activity between network devices and/or services, a hybrid system including a neural network and an expert system, and training a neural network utilizing monitored communication network activity.

We find Appellants' contrary arguments unpersuasive because Appellants mischaracterize Waclawsky, Nuansri, and the combination of Waclawsky and Nuansri. Specifically, Appellants mischaracterize Waclawsky as merely describing a rule-based system that teaches away from a neural network (App. Br. 9; Reply Br. 2, 3), and further state that Waclawsky does not "describe how the 'benchmark data set' could be used as a 'normal range of dependence'" (App. Br. 10). Appellants also mischaracterize Nuansri's neural network as merely a "pre-processor." (App. Br. 8, 9; Reply Br. 2.) We find (supra) that Waclawsky teaches a normal range of network performance, and that Nuansri teaches training a neural network utilizing network communication data. Thus, we find the cited combination of Waclawsky and Nuansri teaches Appellants' disputed claim limitation as recited in Appellants' independent claim 1.

Appellants do not separately argue dependent claims 2-11, 13, and 14 (dependent on claim 1), or independent claims 27, 28, and 29. Accordingly, we select independent claim 1 as representative of claims 2-11, 13, 14, and 27-29, and we find the Waclawsky and Nuansri references render these claims obvious for the reasons set forth with respect to representative claim 1. It follows that Appellants do not persuade us of error in the Examiner's obviousness rejection of claims 1-11, 13, 14, and 27-29, and we affirm the Examiner's rejection of these claims.

CONCLUSIONS OF LAW

Appellants have not shown that the Examiner erred in rejecting claims 1-11, 13, 14, and 27-29 under 35 U.S.C. § 103(a).

DECISION

We affirm the Examiner's rejection of claims 1-11, 13, 14, and 27-29 under 35 U.S.C. § 103(a).

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED

msc

Staas & Halsey LLP
Suite 700
1201 New York Avenue, N.W.
Washington, DC 20005