Ex parte Fisher et al., Appeal 2009-006321, Application No. 10/672,777 (B.P.A.I. Sept. 21, 2010)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 10/672,777
FILING DATE: 09/26/2003
FIRST NAMED INVENTOR: Bradford Austin Fisher
ATTORNEY DOCKET NO.: RSW920030123US1 (111)
CONFIRMATION NO.: 9674

46320 7590 09/22/2010
CAREY, RODRIGUEZ, GREENBERG & PAUL, LLP
STEVEN M. GREENBERG
950 PENINSULA CORPORATE CIRCLE
SUITE 2022
BOCA RATON, FL 33487

EXAMINER: BELANI, KISHIN G
ART UNIT: 2443
MAIL DATE: 09/22/2010
DELIVERY MODE: PAPER

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________

BEFORE THE BOARD OF PATENT APPEALS AND INTERFERENCES
____________________

Ex parte BRADFORD AUSTIN FISHER and RANDY ALLAN RENDAHL
____________________

Appeal 2009-006321
Application 10/672,777 [1]
Technology Center 2400
____________________

Before JOHN A. JEFFERY, JAY P. LUCAS, and JAMES R. HUGHES, Administrative Patent Judges.

HUGHES, Administrative Patent Judge.

DECISION ON APPEAL [2]

[1] Application filed September 26, 2003. The real party in interest is International Business Machines Corp. (App. Br. 1.)

[2] The two-month time period for filing an appeal or commencing a civil action, as recited in 37 C.F.R. § 1.304, or for filing a request for rehearing, as recited in 37 C.F.R. § 41.52, begins to run from the "MAIL DATE" (paper delivery mode) or the "NOTIFICATION DATE" (electronic delivery mode) shown on the PTOL-90A cover letter attached to this decision.

STATEMENT OF THE CASE

Appellants appeal from the Examiner's rejection of claims 1-11 under authority of 35 U.S.C. § 134(a). The Board of Patent Appeals and Interferences (BPAI) has jurisdiction under 35 U.S.C. § 6(b). We affirm.

Appellants' Invention

The invention at issue on appeal relates to a system and method for performing a real-time Service Level Agreement (SLA) impact analysis. (Spec. ¶¶ [0001], [0010].) [3]

Representative Claims

Independent claims 1, 4, and 11 further illustrate the invention. They read as follows:

1. A method for performing a real-time service level agreement (SLA) impact analysis, the method comprising the steps of:
   detecting an event arising from a specific resource;
   determining whether based upon said event said specific resource cannot perform adequately to meet a term within an SLA which directly implicates said specific resource; and,
   further determining whether based upon said event said specific resource inhibits another resource from performing adequately to meet a term within said SLA which does not directly implicate said specific resource, but directly implicates said another resource.

[3] We refer to Appellants' Specification ("Spec.") (paragraph numbers refer to the published application, Pub. No. 2005/0071458 A1); Appeal Brief ("App. Br.") filed March 3, 2008; and Reply Brief ("Reply Br.") filed July 14, 2008. We also refer to the Examiner's Answer ("Ans.") mailed May 12, 2008.
4. A system of performing a real-time service level agreement (SLA) impact analysis comprising:
   a service level manager programmed to establish a plurality of SLAs directly implicating selected resources;
   a relationship database configured for coupling to a plurality of management applications programmed to manage said selected resources; and,
   a modeling and evaluation system communicatively coupled to said relationship database and said service level manager and programmed to perform a real-time SLA impact analysis based both upon resources directly implicated by said SLAs and also upon resources which are related to said resources directly implicated by said SLAs.

11. A method for assessing the impact of an indirectly implicated resource within an service level agreement (SLA) in real time, the method comprising the steps of:
   establishing an SLA directly implicating a performance level for an underlying resource;
   noting at least one resource upon which said underlying resource depends;
   receiving an event arising from said at least one resource;
   determining whether said event affects said underlying resource in meeting said performance level; and,
   if said event prevents said underlying resource from meeting said performance level, generating a notification specifying an impact of said event upon said SLA.

References

The Examiner relies on the following references as evidence of unpatentability:

Main    US 5,893,905        Apr. 13, 1999
Dugan   US 2002/0083166 A1  Jun. 27, 2002
Bartz   US 6,701,342 B1     Mar. 2, 2004 (filed Jan. 20, 2000)
Barkan  US 6,925,493 B1     Aug. 2, 2005 (filed Nov. 17, 2000)

Rejections on Appeal

The Examiner rejects claim 11 under 35 U.S.C. § 102(e) as being anticipated by Bartz.

The Examiner rejects claims 1, 3, 8, and 10 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Main and Bartz.

The Examiner rejects claims 2 and 9 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Main, Bartz, and Barkan.

The Examiner rejects claim 4 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Main and Barkan.

The Examiner rejects claims 5 and 6 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Main, Barkan, and Dugan.

The Examiner rejects claim 7 under 35 U.S.C. § 103(a) as being unpatentable over the combination of Main, Barkan, and Bartz.

ISSUES

Based on our review of the administrative record, Appellants' contentions, and the Examiner's findings and conclusions, the pivotal issues before us are as follows:

1. Does the Examiner err in finding the Bartz reference discloses "establishing an SLA directly implicating a performance level for an underlying resource; noting at least one resource upon which said underlying resource depends; [and] receiving an event arising from said at least one resource" as recited in Appellants' claim 11?

2. Does the Examiner err in finding the Main and Bartz references would have collectively taught or suggested "detecting an event arising from a specific resource; determining whether based upon said event said specific resource cannot perform adequately to meet a term within an SLA . . . ; and, further determining whether based upon said event said specific resource inhibits another resource" as recited in Appellants' claim 1?
3. Does the Examiner err in finding the Main and Barkan references would have collectively taught or suggested "a service level manager programmed to establish a plurality of SLAs directly implicating selected resources; . . . a plurality of management applications programmed to manage said selected resources; and, a modeling and evaluation system communicatively coupled to said . . . service level manager and programmed to perform a real-time SLA impact analysis based both upon resources directly implicated by said SLAs and also upon resources which are related to said resources directly implicated by said SLAs" as recited in Appellants' claim 4?

FINDINGS OF FACT (FF)

Bartz Reference

1. Bartz describes "a method and apparatus for evaluating Service Level Agreements (SLAs) that describe the level of services that are to be provided to customers." (Col. 2, ll. 14-17.) Bartz's method and apparatus "measure the quality of service being provided to customers . . . to determin[e] whether or not the quality of service is in compliance with an SLA." (Col. 2, ll. 19-21.) The SLA is a "data structure that includes a logical expression of one or more Service Level Objectives (SLOs)." (Col. 2, ll. 26-27.) Bartz's method and apparatus "together comprise an SLA management tool that retrieves and evaluates measurement data associated with each SLO to determine whether or not any SLO has been violated. The SLO evaluation results are then utilized in the logical expression of the SLA to determine whether or not the SLA is compliant." (Col. 2, ll. 28-34.) (See col. 9, l. 30 to col. 12, l. 8; Fig. 6.)

2. Bartz describes a monitoring and measurement tool (Firehunter) that tests resources throughout a computer system or network and acquires diagnostic/measurement data from those resources. The measurement tool captures, aggregates, and correlates service performance metrics, allowing assessment of the system performance and availability provided to customers. (Col. 3, ll. 7-48.) The system model monitored by the measurement tool "comprises all of the resources end-to-end throughout the network that are utilized to provide services to the customers of the ISP or ESP. These resources include the portions of the backbone of the network that are utilized in providing those services (e.g., routers) as well as various servers (e.g., DNS server, proxy server, web server, DHCP server, etc.) that are utilized in providing services to the customer." (Col. 3, ll. 20-27.)

3. Bartz describes two SLOs combined into an SLA logical expression. (Col. 9, ll. 30-31.) "SLO1" and "SLO2" correspond to characteristics "associated with a particular resource[s]." (Col. 9, ll. 31-35.) For example, SLO1 may be the throughput or response time of a particular server (Server 1), and SLO2 may be the response time of a different associated server (Server 2). (Col. 9, l. 31 to col. 10, l. 4; col. 10, ll. 24-49; col. 11, ll. 33-45; Fig. 6.) Bartz determines when an SLO has been violated, for example, if the throughput of Server 1 falls below a threshold (<50 kb/sec for 5 minutes). (Col. 9, l. 30 to col. 12, l. 11; Fig. 6.) Bartz determines SLO violations and SLA compliance using an SLA manager and compliance checker that evaluate quality of service (QoS) data against baseline data. Bartz stores the resulting calculations in an SLA database that can be used to produce SLA reports tailored to a user's or customer's needs. (Col. 13, ll. 25-63; Fig. 7.)
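[Editorial note: FF 1 and FF 3 describe a concrete evaluation scheme: each SLO is a threshold test on measurement data for a single resource, and the SLA is a logical expression over the SLO results. The Python sketch below is our illustration of that scheme, not code from Bartz; the thresholds echo the examples the Board cites (throughput below 50 kb/sec over a 5-minute window; response time over 5 seconds), and all identifiers and data are hypothetical.]

```python
# Sketch of Bartz-style SLA evaluation (FF 1, FF 3): SLOs are threshold
# tests over a measurement window; the SLA is a logical expression of
# the SLO results. All names and data are hypothetical.
from statistics import mean

def slo_violated(samples, threshold, low_is_bad=True):
    """Treat an SLO as violated when the windowed average crosses the
    threshold (an assumed rule; Bartz does not mandate averaging)."""
    avg = mean(samples)
    return avg < threshold if low_is_bad else avg > threshold

# Five one-minute throughput samples for Server 1 (kb/sec) and
# response-time samples for Server 2 (seconds) -- made-up data.
server1_throughput = [48.0, 42.5, 39.0, 45.2, 41.1]
server2_response = [5.4, 6.1]

slo1 = slo_violated(server1_throughput, 50, low_is_bad=True)  # < 50 kb/sec
slo2 = slo_violated(server2_response, 5, low_is_bad=False)    # > 5 seconds

# SLA compliance is the logical expression over the SLO results; here
# we assume the conjunction "no SLO may be violated".
sla_compliant = not (slo1 or slo2)
print(f"SLO1 violated: {slo1}; SLO2 violated: {slo2}; SLA compliant: {sla_compliant}")
```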
Main Reference

4. Main describes a system and method for automated performance monitoring of data processing jobs according to Service Level Agreements (SLAs) by comparing actual performance against an SLA, "identify[ing] discrepancies, and analyz[ing] impacts to other jobs in a job stream." (Col. 3, ll. 33-34; see col. 3, ll. 27-34.) Main explains that enterprises use extensive data processing to perform business-critical tasks, such as billing and order entry. The data processing jobs "may be batch jobs that run according to a schedule or other dependency," and often a business process (task) "is performed by a job stream, and thus, many jobs are dependent on the successful completion of previous jobs." (Col. 1, ll. 30-33; see col. 1, ll. 23-35.)

5. Main describes a production server operating an automated SLA monitor (ASM) that receives production (mainframe) computer exception data and job performance data from the Unicenter Star console. (Col. 3, l. 35 to col. 4, l. 49; Fig. 2.) Main explains that:

   a Computer Associates (CA) product known as CA-7 and/or CA-Unicenter operates on the production computer 102. CA-7 and CA-Unicenter are well known to persons skilled in the relevant art(s) and are used to schedule jobs, collect job performance data and identify job processing exceptions. Other products having this functionality could alternatively be used. Job performance data includes runtimes and return codes. Exceptions include ABENDs, terminations, and error codes.

(Col. 3, ll. 53-61.) Main also describes collecting job exception data in an incorporated reference:

   Unicenter Star console 104A retrieves job exception data from the production computers' (102A-102L) CA-7 programs. This data includes job ABENDs, error codes, and terminations. A system and process for retrieving job exception data from production computers and notifying the user is disclosed in a copending application entitled "Integrated Cross-Platform Batch Management System," application serial number 08/672813, . . . incorporated herein by reference in its entirety.

(Col. 5, ll. 29-38.) The incorporated reference (now U.S. Patent No. 5,872,970) describes numerous types of exceptions:

   Examples of such exceptions include, but are not limited to, late conditions, JCL errors, system ABENDs, user ABENDs, skeleton status, waiting on resources, or return codes not equal to "0". A late condition exception is a job that appears late at a selected data center. A JCL error results when a job has a JCL conflict or error. System ABENDs are jobs that have abnormally ended due to a system problem. User ABENDs are jobs that have abnormally ended due to a user problem. Skeleton status are jobs that have been scheduled at a data center, but cannot be submitted due to resource issues. Waiting on resources refers to there being a lack of memory (i.e., RAM, disk, tape drive, etc.) and/or initiators to properly execute the job.

(Col. 6, ll. 38-50.)

6. Main describes triggering an alert upon receiving an exception, such as an ABEND (col. 7, ll. 37-40), or when the calculated clocktime exceeds an SLA completion time. (Col. 8, l. 55 to col. 9, l. 12; Fig. 5, elements 510, 512, 514, & 516.) For example:

   In step 514, the job record is tested to determine if the calculated clocktime of all remaining jobs in the SLA jobstream exceeds the SLA completion time. The current system time is used as a base and then the last job in the SLA jobstream that completed is determined. This is determined from the prior run data. Then the remaining clocktimes from the clocktime data are added together to get an estimated completion time. If the estimated completion time exceeds the SLA completion time, an alert is triggered in step 516.

(Col. 8, l. 65 to col. 9, l. 6.)
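[Editorial note: the step 514/516 passage quoted in FF 6 amounts to a simple arithmetic check: add the prior-run clocktimes of the jobs remaining in the SLA jobstream to the current time, and alert if the estimate overshoots the SLA deadline. Below is a minimal sketch under hypothetical data and names; Main's monitor is an integrated mainframe product, not standalone code like this.]

```python
# Sketch of the completion-time test Main describes in steps 514/516
# (FF 6). Job data, times, and helper names are hypothetical.
import datetime as dt

def check_jobstream(now, remaining_clocktimes, sla_completion_time):
    """Estimate jobstream completion by adding the remaining jobs'
    prior-run clocktimes to the current time; return an alert string
    if the estimate exceeds the SLA completion time, else None."""
    estimated = now + sum(remaining_clocktimes, dt.timedelta())
    if estimated > sla_completion_time:
        return (f"ALERT: estimated completion {estimated:%H:%M} "
                f"exceeds SLA deadline {sla_completion_time:%H:%M}")
    return None

now = dt.datetime(2003, 9, 26, 2, 30)
remaining = [dt.timedelta(minutes=40), dt.timedelta(minutes=55)]  # prior-run clocktimes
sla_deadline = dt.datetime(2003, 9, 26, 4, 0)
print(check_jobstream(now, remaining, sla_deadline))  # 4:05 > 4:00 -> alert
```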
Barkan Reference

7. Barkan describes a system enabling service providers (Application Service Providers (ASPs)) to manage Service Level Agreements (SLAs) utilizing the Service Level Agreement Language of Measurement (SLALOM). (Col. 1, ll. 14-26; col. 3, l. 66 to col. 4, l. 60.) Barkan's system allows ASPs "to define SLAs with their customers." (Col. 1, l. 21; see col. 4, ll. 14-30.) Barkan also describes a central management tool based on SLALOM that allows the ASP to manage all aspects of service agreements – SLAs – between the ASP and its customers, including monitoring the actual service level delivered, and providing reporting of the actual service level delivered compared with the service level agreed upon in the SLA. (Col. 2, ll. 35-50; col. 4, ll. 11-17, 30-38.) Barkan's service-level language (SLALOM) contains formulas describing "how to compute some service-level value from measurements collected by the ASP" using "various tools that measure resources the ASP uses to supply service to its customers." (Col. 3, ll. 34-39; col. 4, ll. 43-48.) The formulas "can be loaded into the server computer memory, and from there it may collect measurements from measurement tools, and subsequently calculate the service level. The results of these computations can be analyzed, saved and monitored. Furthermore these results can be used to generate various summaries and reports." (Col. 3, ll. 40-46; col. 4, ll. 49-54.)

8. Barkan's system includes: an "SLA Manager" that "manages the administrative work of the SLA" (col. 5, ll. 21-23); an "SLA Database" that "contains the information that the SLA Manager uses," including "the SLA definitions that target the amount of service level promised to the customer" (col. 5, ll. 24-29); an "SLA Engine" that processes "the data in the SLA DB 32 and generat[es] maps of the promised service level for a customer or a group of customers" (col. 5, ll. 30-34); a "CSL Engine" that "processes the measurements and events reported by the [monitoring tools] . . . . [This] information is calculated, aggregated and then stored in the CSL [database] reflecting the measured service level actually provided by the ASP at a certain time interval" (col. 5, ll. 35-41); a "CSL DB" or "CSL database" that "contains the Calculated Services Level measurements and events calculated and aggregated by the CSL engine" according to the "aggregation method, as well as the aggregation time defined in the SLA DB 32 as a part of the formula of the given rule" (col. 5, ll. 42-47); and an "Infrastructure Manager" that "is responsible for holding the information about [t]he map of resources, i.e. what is the role of each resource, where is it connected, and which user/users are influenced by it . . . . [which] allows the system to find the resources that should be monitored for each customer, in order to compute that customer's service level" (col. 6, ll. 24-31). (See col. 5, ll. 21-47; col. 6, ll. 24-31; Figs. 2 and 3.)

9. Barkan also provides a detailed example of an SLA between an ASP and a customer (col. 9, l. 8 to col. 10, l. 54), including an explanation of the information utilized in formulating and monitoring a particular SLA. (Col. 6, l. 38 to col. 10, l. 54.) With respect to this SLA, Barkan describes specific infrastructure or resources – a "mapping of ASP resources" – associated with the SLA terms and monitored by the system to determine the actual service levels provided to the customer. (Col. 7, ll. 29-34.) For example: in the SLA example, Rule (c) details the uptime objective/requirement for a web server, application server, and two routers; Rule (d) describes the applications (resources) provided by the ASP to the customer related to the resources in Rule (c) – in this instance SAP, Excel, and Remedy. (Col. 9, l. 8 to col. 10, l. 54.)
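[Editorial note: the Infrastructure Manager of FF 8 is essentially a resource map associating each resource with its role, its connections, and the customers it influences, queried to find what must be monitored per customer. The sketch below shows one plausible shape for such a map; the dictionary layout, resource names, and customer names are our assumptions, not Barkan's data structures.]

```python
# Hypothetical resource map in the style of Barkan's Infrastructure
# Manager (FF 8): role, connections, and influenced customers per
# resource. Names and layout are illustrative assumptions.
infrastructure = {
    "web-server-1": {"role": "web server",         "connected_to": ["router-1"], "customers": ["acme"]},
    "app-server-1": {"role": "application server", "connected_to": ["router-2"], "customers": ["acme"]},
    "router-1":     {"role": "router",             "connected_to": [],           "customers": ["acme", "globex"]},
    "router-2":     {"role": "router",             "connected_to": [],           "customers": ["acme"]},
}

def resources_to_monitor(customer: str) -> list[str]:
    """Find the resources that must be monitored to compute a given
    customer's service level, per Barkan's described use of the map."""
    return [name for name, info in infrastructure.items()
            if customer in info["customers"]]

print(resources_to_monitor("acme"))    # all four resources above
print(resources_to_monitor("globex"))  # ['router-1']
```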
ANALYSIS

Appellants have elected to separately argue independent claims 1 (App. Br. 10), 4 (App. Br. 14), and 11 (App. Br. 6), and argue claims 3, 8, and 10 as a group based on representative claim 1 (App. Br. 10). Appellants do not separately argue claims 2 and 9 (App. Br. 14), claims 5 and 6 (App. Br. 18-19), or claim 7 (App. Br. 19). Therefore, we select independent claims 1, 4, and 11 as representative of Appellants' groupings, and we will address Appellants' arguments with respect thereto. 37 C.F.R. § 41.37(c)(1)(vii). See In re Nielson, 816 F.2d 1567, 1572 (Fed. Cir. 1987).

Appellants have the opportunity on appeal to the Board of Patent Appeals and Interferences (BPAI) to demonstrate error in the Examiner's position. See In re Kahn, 441 F.3d 977, 985-86 (Fed. Cir. 2006) (citing In re Rouffet, 149 F.3d 1350, 1355 (Fed. Cir. 1998)). The Examiner sets forth a detailed explanation of a reasoned conclusion of anticipation in the Examiner's Answer with respect to representative claim 11. (Ans. 3-5, 18-20.) The Examiner also sets forth a detailed explanation of a reasoned conclusion of obviousness in the Examiner's Answer with respect to representative claim 1 (Ans. 5-6, 20-22) and representative claim 4 (Ans. 12-14, 23-25). Therefore, we look to Appellants' Briefs to show error in the proffered reasoned conclusions. See Kahn, 441 F.3d at 985-86. We also note that Appellants make numerous confusing and irrelevant arguments directed to the prosecution of their patent application preceding the last office action mailed on October 3, 2007 ("Final Office Action"). [4] We interpret Appellants' arguments as follows.

[4] Appellants make numerous arguments related to office actions preceding the Final Office Action. (App. Br. 5-18.) We remind Appellants that they currently appeal the Examiner's rejection of claims 1-11 in the Final Office Action. (App. Br. 2.) The rejections in the Final Office Action are the only rejections before the BPAI on appeal. Accordingly, the rejections made in the preceding office actions and any corresponding responses by Appellants are not before the BPAI, and we will not address any arguments related to prosecution preceding the Final Office Action.

Issue 1: Arguments Concerning the Examiner's Rejection of Claim 11 under 35 U.S.C. § 102(e)

The Examiner rejects claim 11 for being anticipated by the Bartz reference. (Ans. 3-5, 18-20.) Appellants contend that: the Bartz reference "does not identify [a] particular underlying resource" and the Examiner has failed to "identify where Bartz teaches the 'underlying resource'" (App. Br. 7); "the Examiner has failed to specifically identify, within Bartz, the features corresponding to the claimed 'at least one resource' upon which the underlying resource depends" (App. Br. 8); and that "[t]he throughput falling may be considered 'experiencing an event,' but the Examiner has not established that one having ordinary skill in the art would consider experiencing an event to identically disclose the claimed receiving an event" (App. Br. 9).
The Examiner finds that the Bartz reference discloses the disputed features of an underlying resource and "at least one resource upon which the underlying resource depends," in that Bartz describes Service Level Objectives (SLOs) for characteristics of a particular resource – for example, two distinct server resources (Server 1 and Server 2) where the characteristics described in the SLOs affect one another, for example, that the throughput of Server 1 (a storage server) may affect the response time of Server 2 (a web server) or vice versa. (Ans. 3-4, 18-19.) The Examiner also finds that the Bartz reference discloses the disputed feature of "receiving an event," as Bartz explicitly describes detecting an SLO violation, for example, throughput falling below a threshold. (Ans. 4, 19-20.)

Based on the record before us, we find no error in the Examiner's anticipation rejection of representative claim 11. After reviewing the record on appeal, we agree with the Examiner that the Bartz reference discloses the disputed limitations of representative claim 11. We begin our analysis by broadly but reasonably construing Appellants' disputed claim limitations. See In re Am. Acad. of Sci. Tech Ctr., 367 F.3d 1359, 1364 (Fed. Cir. 2004); In re Zletz, 893 F.2d 319, 321 (Fed. Cir. 1989). We construe Appellants' recited resources – Appellants' "underlying resource" and "at least one resource upon which the underlying resource depends" – as meaning a source of something useful, such as a source of information or a service provider, or an asset, for example, a server. In doing so, we note that Appellants' Specification does not limit the meaning of a "resource." (See Spec. 7-8 & 10.) We also construe Appellants' claimed "receiving an event" as meaning coming into possession of or detecting an event. We note that Appellants' Specification does not explicitly define or otherwise limit the meaning of "receiving." (See Spec. 9, first para.)

As detailed in the Findings of Fact section supra, the Bartz reference describes a system for evaluating SLAs by measuring the quality of service (QoS) provided by system components to customers. The SLAs include SLOs, which represent the QoS objectives. (FF 1.) Bartz describes a monitoring and measurement tool that captures data for all of the resources in a network, including various servers, e.g., proxy servers and web servers, utilized to provide services to customers. (FF 2.) Bartz also describes two SLOs (SLO1 and SLO2) that correspond to characteristics of particular resources, e.g., SLO1 may be the throughput or response time of a particular server (Server 1) and SLO2 may be the response time of a different associated server (Server 2). (FF 3.) Bartz further detects and reports when an SLO has been violated, e.g., when the throughput of a server falls below a threshold. (FF 3.) Thus, we find that Bartz discloses multiple interrelated (dependent) resources, e.g., a proxy server and a web server, or, as explained by the Examiner, a web server and a storage server. Bartz's SLAs include multiple SLOs.
An SLO may directly implicate a particular resource (e.g., the response time of a proxy server) and another SLO may describe characteristics of an interrelated resource (e.g., the response time of a web server). We find these interrelated resources disclose Appellants' recited "underlying resource" and "at least one resource upon which the underlying resource depends." We also find that Bartz discloses detecting and reporting SLO violations, which we find discloses Appellants' recited "receiving an event."

We find Appellants' contrary arguments unpersuasive. Specifically, Appellants mischaracterize Bartz as failing to describe the recited resources and receiving events related to the resources. Further, Appellants' arguments are not commensurate with the scope of their recited claim limitations. Appellants' disputed claim limitations merely require establishing an SLA for the performance of a particular first resource, "noting" a second interrelated resource on which the first resource depends, and receiving an event related to the second resource. Appellants' claim does not positively recite how the second resource is "noted" or how the event is "received." We find (supra) that Bartz discloses SLOs describing the performance of at least two interrelated servers, which we interpret as "noting" the second resource. Bartz also detects (and reports) an SLO violation for the second server, which we interpret as "receiving an event arising from" the second server (resource).

Thus, we find the Bartz reference discloses Appellants' disputed claim limitations as recited in Appellants' independent claim 11. It follows that Appellants do not persuade us of error in the Examiner's anticipation rejection of claim 11, and we affirm the Examiner's rejection of this claim.

Issue 2: Arguments Concerning the Examiner's Rejection of Claims 1, 3, 8, and 10 under 35 U.S.C. § 103(a)

The Examiner rejects claim 1 for being obvious in view of the Main and Bartz references. (Ans. 5-6, 20-22.) Appellants elect (supra) claim 1 as representative of claims 3, 8, and 10. Appellants contend that: the Examiner does not establish that the Main reference teaches "an event arising from a specific resource is detected" (App. Br. 11); the Examiner does not establish that the Main reference teaches "'determining whether based upon said event said specific resource cannot perform adequately'" (App. Br. 11); and the Bartz reference is "silent as to what specific resource is inhibited by the event and the determination of the same" (App. Br. 13).

The Examiner finds that the Main reference teaches the disputed features of "detecting an event arising from a specific resource" and "determining whether based upon said event said specific resource cannot perform adequately to meet a term within an SLA." (Ans. 5.) The Examiner also finds that the Bartz reference teaches the disputed feature of "determining whether based upon said event said specific resource inhibits another resource from performing adequately to meet a term within said SLA which does not directly implicate said specific resource, but directly implicates said another resource." (Ans. 6.) (See Ans. 5-6, 20-22.)

Based on the record before us, we find no error in the Examiner's obviousness rejection of representative claim 1. We agree with the Examiner that the Main and Bartz references would have collectively taught or suggested the disputed limitations of representative claim 1. As with claim 11 supra, we begin our analysis by construing Appellants' disputed claim limitations, and we broadly but reasonably construe Appellants' recited resources – Appellants' "specific resource" and "another resource" – as meaning a source of something useful, such as a source of information or a service provider, or an asset, for example, a data processing job performed by a mainframe computer or a computing resource such as CPU time.
As detailed in the Findings of Fact section supra, the Main reference describes a system for monitoring performance of data processing jobs according to SLAs by comparing the performance against the SLA, identifying discrepancies, and analyzing impacts to other jobs in a job stream. (FF 4-5.) In particular, Main describes performing business-critical tasks, such as billing and order entry, using batch data processing jobs that are performed in a job stream, "that run according to a schedule or other dependency," and "are dependent on the successful completion of previous jobs." (FF 4.) Thus, we find that Main teaches specific resources (data processing jobs) and related or dependent resources (subsequent data processing jobs in a job stream).

Main also describes that the software operating on the mainframe (CA-7) is "well known to persons skilled in the relevant art(s)" to collect job performance data and identify job processing exceptions such as ABENDs. (FF 5.) Main further discloses numerous exceptions, such as system ABENDs and "waiting on resources" (such as a lack of memory (RAM, disk, or tape drive)) (FF 5), and triggering alerts when an exception is received – for example, an ABEND is received or the calculated clocktime exceeds an SLA completion time (FF 6). As explained by the Examiner, ABENDs are well known in the art and would have implicated a particular computer resource (such as CPU time or disk storage) to one skilled in the art. (Ans. 20-21.) Additionally, the Main reference teaches that software operating on the mainframe was well known for collecting job performance data and identifying job processing exceptions such as ABENDs or waiting on resources such as memory. Therefore, we find Main teaches specific resources (processing jobs and/or computer resources) utilized to perform a task such as billing, and detecting events (triggering alerts – exceptions/ABENDs) related to the resources.

We also find that Main teaches determining, based upon the alert (event), that the specific resource cannot perform adequately to meet the SLA, in that Main at least explicitly describes calculating a clocktime for the remaining jobs in an SLA jobstream based on the run time data of a last completed data processing job, and determining if the clocktime exceeds the SLA completion time for the jobstream. (See FF 6.) We therefore find that Main would have taught or suggested to a skilled artisan Appellants' recited features of "detecting an event arising from a specific resource" and "determining whether based upon said event said specific resource cannot perform adequately to meet a term within an SLA."

We find (supra) that Bartz describes (teaches) interrelated resources as well as detecting and reporting SLO violations. We therefore find that Bartz teaches events, specific resources, and related (another) resources.
As explained by the Examiner, Bartz's figure 6 shows "an SLA violation caused by a combination of two separate events (throughput [<] 50 Kb/sec for 5 minutes and response time > 5 seconds for 2 minutes), wherein the server resource is unable to maintain the response time in part due to throughput from storage devices falling below the specified rate of 50 Kb/sec for 5 minutes." (Ans. 6.) Thus, we also find that Bartz would have taught or suggested to one skilled in the art determining, based upon an event (an SLO violation for a specific resource), that the specific resource inhibits another resource from performing adequately to meet an SLA term (a second SLO) that does not directly implicate the specific resource. Accordingly, we find Bartz would have taught or suggested to a skilled artisan Appellants' recited feature of "determining whether based upon said event said specific resource inhibits another resource from performing adequately to meet a term within said SLA which does not directly implicate said specific resource, but directly implicates said another resource."

Appellants' contrary arguments are unpersuasive. We find (supra) that both Main and Bartz teach a "specific resource," detecting an event arising from the specific resource, and determining based on the event whether the specific resource violates an SLA term. We also find (supra) that Bartz teaches a second resource (another resource). We further find (supra) that Bartz teaches or suggests the specific resource's SLO violation inhibiting the second resource (causing, at least in part, an SLO violation and resulting SLA non-compliance for/by the second resource). Additionally, Appellants do not provide any persuasive evidence or argument supporting their assertions of alleged error in the Examiner's position, and in particular, traversing the Examiner's interpretation of Bartz's figure 6 (see Ans. 6, discussed supra). Instead, Appellants merely assert that "the Examiner employed similar analysis as to characterizing the teachings of Bartz when characterizing the teaching of Bartz in the rejection of claim 11 under 35 U.S.C. § 102." (App. Br. 13; see App. Br. 8 and Reply Br. 3-4, 7.) Even if we assume, arguendo, that Appellants are correct and the Examiner has failed to show that Bartz explicitly discloses that Server 1 violating "SLO 1 affects how server 2 meets SLO 2" (Reply Br. 4), this does not amount to evidence (or even argument) that such a relationship is beyond the understanding or skill of one skilled in the art, i.e., that Bartz would not have taught or suggested such a relationship to a skilled artisan. See Muniauction, Inc. v. Thomson Corp., 532 F.3d 1318, 1327 (Fed. Cir. 2008); Leapfrog Enters., Inc. v. Fisher-Price, Inc., 485 F.3d 1157, 1161 (Fed. Cir. 2007) (relying on the common sense of those skilled in the art, as well as Leapfrog's failure to present evidence that the modification was beyond the skill of those skilled in the art, in finding a proposed modification to the prior art obvious).

Thus, we find the Main and Bartz references would have taught or suggested to one skilled in the art Appellants' disputed claim limitations as recited in Appellants' independent claim 1. Appellants do not separately argue independent claim 8 or dependent claims 3 and 10, and these claims fall with representative claim 1. It follows that Appellants do not persuade us of error in the Examiner's obviousness rejection of claims 1, 3, 8, and 10, and we affirm the Examiner's rejection of these claims.
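[Editorial note: to summarize the technical dispute in executable form, the sketch below illustrates the two determinations recited in claim 1: a direct SLA-term check on the resource that raised the event, followed by a check of dependent resources. It is our illustration only; neither Appellants' specification nor the cited references is the source of this code, and every name, threshold, and propagation rule is hypothetical.]

```python
# Illustration of claim 1's two determinations; all names, thresholds,
# and the propagation rule are hypothetical assumptions.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    sla_terms: dict = field(default_factory=dict)   # metric -> minimum acceptable value
    dependents: list = field(default_factory=list)  # resources relying on this one

def impact_analysis(resource: Resource, metric: str, value: float) -> list[str]:
    """First determine whether the event's resource fails its own SLA
    term; then determine whether that failure inhibits dependent
    resources' terms (the claim's second, indirect determination)."""
    impacts = []
    minimum = resource.sla_terms.get(metric)
    if minimum is not None and value < minimum:
        impacts.append(f"{resource.name} cannot meet SLA term '{metric}'")
        for dep in resource.dependents:
            for term in dep.sla_terms:
                impacts.append(f"{resource.name} inhibits {dep.name} on SLA term '{term}'")
    return impacts

# Mirrors the Examiner's reading of Bartz Fig. 6: a storage server's
# throughput shortfall degrading a web server's response time.
storage = Resource("storage-server", {"throughput_kb_s": 50.0})
web = Resource("web-server", {"response_time_s": 5.0})
storage.dependents.append(web)
print(impact_analysis(storage, "throughput_kb_s", 32.0))
```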
Claims 2 and 9 under 35 U.S.C. § 103(a)

The Examiner rejects claims 2 and 9 for being obvious in view of the Main, Bartz, and Barkan references. (Ans. 9-12.) Appellants do not separately argue the rejection of claims 2 (dependent on claim 1) and 9 (dependent on claim 8). Rather, Appellants simply reiterate their arguments made with respect to claim 1 (supra). (App. Br. 14.) Based on the record before us, we find no error in the Examiner's obviousness rejection of claims 2 and 9 for the reasons set out with respect to claim 1 (supra). Accordingly, we also affirm the Examiner's rejection of claims 2 and 9.

Issue 3: Arguments Concerning the Examiner's Rejection of Claim 4 under 35 U.S.C. § 103(a)

The Examiner rejects claim 4 for being obvious in view of the Main and Barkan references. (Ans. 12-14, 23-25.) Appellants contend that Barkan does not teach: (1) a service level manager programmed to establish a plurality of SLAs directly implicating selected resources (App. Br. 14-15); (2) a plurality of management applications programmed to manage the selected resources (App. Br. 16); (3) a modeling and evaluation system (App. Br. 16-17); or (4) the system performing a real-time SLA impact analysis based both upon resources directly implicated by the SLAs and related resources (App. Br. 18).

The Examiner finds that the Main reference generally teaches a system for performing a real-time SLA impact analysis. (Ans. 12.) The Examiner finds that the Barkan reference describes an SLA Manager, an SLA DB (database), and an SLA Engine, which teach the disputed features of a system with a service level manager programmed to establish a plurality of SLAs directly implicating selected resources. (Ans. 13, 23-24.) The Examiner also finds that the Barkan reference describes an Infrastructure Manager that "stores the information about the map of resources" (Ans. 13) in an Infrastructure DB (database), as well as "the various functions that the Infrastructure Manager 24 is responsible for" (Ans. 24), and that this disclosure teaches the disputed feature of a plurality of management applications programmed to manage the selected resources. (Ans. 13, 24.) The Examiner further finds that the Barkan reference describes a CSL Engine that, together with the SLA Engine, functions "as a modeling and evaluation system" (Ans. 13), and that these components teach the disputed features of a modeling and evaluation system programmed to perform a real-time SLA impact analysis based both upon resources directly implicated by the SLAs and also related resources. (Ans. 13-14, 24-25.)

Based on the record before us, we find no error in the Examiner's obviousness rejection of representative claim 4. We agree with the Examiner that the Main and Barkan references would have collectively taught or suggested the disputed limitations of representative claim 4, for essentially the reasons set forth by the Examiner supra. (See Ans. 12-14, 23-25.)
We additionally note, as detailed in the Findings of Fact section supra, that the Barkan reference teaches a system for ASPs to manage SLAs, which includes an SLA Manager, an SLA Database including SLA definitions, and an SLA Engine that processes the SLA database data and generates a map of the SLA service level (i.e., the role of each resource in the SLA objective and its relationships with other resources and/or users). (FF 7-8.) Barkan also teaches an Infrastructure Manager that retrieves and stores information about the resources referenced in the SLA, e.g., the role of each resource, its relationship or connection with other resources and/or applications, and the users affected by the resource, which allows the system to locate the resources and determine the resources that require monitoring to compute each customer's service level (i.e., performing/providing resource management applications). (FF 8.) Barkan further provides a detailed SLA example that teaches SLA rules (objectives), including: an uptime requirement for a web server, application server, and two routers – which we broadly but reasonably construe to be "selected resources" directly implicated in an SLA; and related applications such as SAP, Excel, and Remedy – which we broadly but reasonably construe to be "resources which are related to said resources directly implicated by said SLAs." (FF 9.)

Thus, we find (supra) the Main reference teaches a system for performing real-time SLA impact analysis, and the Barkan reference would have taught or suggested to one skilled in the art the particular features of the system. Specifically, we find Barkan would have taught or suggested to one skilled in the art Appellants' disputed features of: (1) a service level manager – taught by Barkan's SLA Manager and related components – programmed to establish a plurality of SLAs directly implicating selected resources; (2) a plurality of management applications – taught by Barkan's Infrastructure Manager – programmed to manage the selected resources; (3) a modeling and evaluation system – taught by Barkan's SLA Engine and CSL Engine; and (4) a system that performs a real-time SLA impact analysis based both upon resources directly implicated by the SLAs – e.g., an application server – and related resources – e.g., related applications such as SAP.

Appellants do not separately argue dependent claims 5-7, and these claims fall with representative claim 4 (supra). It follows that Appellants do not persuade us of error in the Examiner's obviousness rejection of claims 4-7, and we affirm the Examiner's rejection of these claims.

CONCLUSIONS OF LAW

Appellants have not shown that the Examiner erred in rejecting claim 11 under 35 U.S.C. § 102(e).

Appellants have not shown that the Examiner erred in rejecting claims 1-10 under 35 U.S.C. § 103(a).

DECISION

We affirm the Examiner's rejection of claim 11 under 35 U.S.C. § 102(e).

We affirm the Examiner's rejection of claims 1-10 under 35 U.S.C. § 103(a).

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED

msc

Carey, Rodriguez, Greenberg & Paul, LLP
Steven M. Greenberg
950 Peninsula Corporate Circle
Suite 3020
Boca Raton, FL 33487